Cross-Lingual Abstractive Summarization

6 papers with code • 4 benchmarks • 2 datasets

Cross-lingual abstractive summarization is the task of generating an abstractive summary in one language for a source document written in a different language.

Most implemented papers

WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization

esdurmus/Wikilingua Findings of the Association for Computational Linguistics 2020

As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset.

Cross-Lingual Abstractive Summarization with Limited Parallel Resources

WoodenWhite/MCLAS ACL 2021

Employing one unified decoder to generate the sequential concatenation of monolingual and cross-lingual summaries, MCLAS makes the monolingual summarization task a prerequisite of the cross-lingual summarization (CLS) task.
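The concatenation idea above can be sketched as a target-sequence builder. This is a minimal illustration, not the MCLAS implementation: the separator token, language tag, and function name are my own assumptions; the point is only that one decoder target contains the monolingual summary first and the cross-lingual summary second.

```python
# Hedged sketch of MCLAS-style decoder targets (token names are invented,
# not taken from the MCLAS codebase): the decoder is trained to generate
# the monolingual summary first, then the cross-lingual summary, so the
# monolingual task becomes a prerequisite of the CLS task.

SEP = "[LSEP]"  # hypothetical separator between the two summary segments


def build_mclas_target(mono_summary: str, cross_summary: str,
                       target_lang_tag: str = "<de>") -> str:
    """Concatenate monolingual and cross-lingual summaries into a single
    sequential decoder target."""
    return f"{mono_summary} {SEP} {target_lang_tag} {cross_summary}"


target = build_mclas_target(
    "The council approved a recycling program.",
    "Der Stadtrat hat ein Recyclingprogramm beschlossen.",
)
print(target)
```

During training, loss on the first segment supervises monolingual summarization, while the second segment is conditioned on it, which is what makes the monolingual task a prerequisite.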

Towards Making the Most of Multilingual Pretraining for Zero-Shot Neural Machine Translation

ghchen18/acl22-sixtp 16 Oct 2021

When applied to zero-shot cross-lingual abstractive summarization, it produces an average performance gain of 12.3 ROUGE-L over mBART-ft. We conduct detailed analyses to understand the key ingredients of SixT+, including multilinguality of the auxiliary parallel data, positional disentangled encoder, and the cross-lingual transferability of its encoder.

CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs

csebuetnlp/crosssum 16 Dec 2021

We present CrossSum, a large-scale cross-lingual abstractive summarization dataset comprising 1.7 million article-summary samples in 1500+ language pairs.

WikiMulti: a Corpus for Cross-Lingual Summarization

tikhonovpavel/wikimulti 23 Apr 2022

Cross-lingual summarization (CLS) is the task of producing a summary in one particular language for a source document in a different language.
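The task definition above can be made concrete with a toy data sample. This is only an illustrative sketch of the CLS input/output format; the field names and the example texts are invented for illustration and do not come from any of the datasets listed here.

```python
# Illustrative CLS sample: the document and its summary are in different
# languages. Field names and texts are hypothetical, chosen for clarity.

cls_example = {
    "source_lang": "en",
    "target_lang": "de",
    "document": (
        "The city council approved a new recycling program on Monday. "
        "Residents will receive separate bins for glass, paper, and "
        "plastic, and weekly collection begins next month."
    ),
    # The summary is written in the target language, not the source one.
    "summary": "Der Stadtrat hat ein neues Recyclingprogramm beschlossen.",
}


def is_cross_lingual(sample: dict) -> bool:
    """A sample is cross-lingual when source and target languages differ."""
    return sample["source_lang"] != sample["target_lang"]


print(is_cross_lingual(cls_example))  # True
```

Monolingual summarization is the special case where `source_lang` equals `target_lang`; CLS benchmarks such as WikiLingua and CrossSum pair documents and summaries across languages.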