Cross-Lingual Abstractive Summarization
6 papers with code • 4 benchmarks • 2 datasets
Most implemented papers
WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization
To provide baselines for future studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset.
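To get started with the dataset, a minimal sketch of loading it for inspection, assuming the Hugging Face Hub mirror "wiki_lingua" with per-language configurations (the config name and field names are assumptions; verify them against the dataset card):

```python
# Minimal sketch: load one WikiLingua language configuration and inspect
# its schema before building a cross-lingual (article -> English summary)
# evaluation set. The "wiki_lingua" dataset ID and "spanish" config name
# are assumptions, not guaranteed by the paper itself.
from datasets import load_dataset

ds = load_dataset("wiki_lingua", "spanish", split="train")
print(ds[0].keys())  # verify the actual field names before relying on them
```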
Cross-Lingual Abstractive Summarization with Limited Parallel Resources
MCLAS employs a single unified decoder to generate the sequential concatenation of the monolingual and cross-lingual summaries, making the monolingual summarization task a prerequisite of the cross-lingual summarization (CLS) task.
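A minimal sketch of that target construction, assuming a special separator token delimits the two summaries (the token and function names are illustrative, not the paper's exact implementation):

```python
# Sketch of an MCLAS-style decoder target: the monolingual summary comes
# first, then the cross-lingual one, so target-language tokens can attend
# to the already-generated source-language summary during decoding.
SEP = "<sep>"  # hypothetical separator token between the two summaries

def build_target(mono_summary: str, cross_summary: str) -> str:
    """Concatenate the two summaries into one decoder target sequence."""
    return f"{mono_summary} {SEP} {cross_summary}"

target = build_target(
    "Resumen en español del artículo.",   # monolingual summary
    "English summary of the article.",    # cross-lingual summary
)
```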
Towards Making the Most of Multilingual Pretraining for Zero-Shot Neural Machine Translation
When applied to zero-shot cross-lingual abstractive summarization, it produces an average performance gain of 12.3 ROUGE-L over mBART-ft. We conduct detailed analyses to understand the key ingredients of SixT+, including multilinguality of the auxiliary parallel data, positional disentangled encoder, and the cross-lingual transferability of its encoder.
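SixT+ itself is not sketched here, but the zero-shot setup it is compared against can be illustrated with a generic multilingual model such as mBART-50: summarize a document in one language while forcing the decoder to emit another. The checkpoint name, language codes, and input text below are assumptions for illustration:

```python
# Minimal sketch of zero-shot cross-lingual summarization with mBART-50:
# a German document in, an English summary out, with no German-English
# summarization data used. This illustrates the evaluation setting, not
# the SixT+ architecture.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="de_DE")
model = MBartForConditionalGeneration.from_pretrained(model_name)

doc = "Ein deutscher Nachrichtenartikel ..."  # placeholder source document
inputs = tokenizer(doc, return_tensors="pt", truncation=True)
summary_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],  # decode in English
    max_length=64,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```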
CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs
We present CrossSum, a large-scale cross-lingual abstractive summarization dataset comprising 1.7 million article-summary samples in 1500+ language pairs.
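A minimal sketch of loading one language pair, assuming the dataset is published on the Hugging Face Hub under "csebuetnlp/CrossSum"; the pair name "english-french" is an assumption, so check the dataset card for the actual configurations:

```python
# Sketch: load a single CrossSum language pair. The Hub ID and config
# name are assumptions; the schema should be verified before use.
from datasets import load_dataset

pairs = load_dataset("csebuetnlp/CrossSum", "english-french", split="test")
print(pairs[0])  # expect article/summary fields; confirm the actual names
```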
WikiMulti: a Corpus for Cross-Lingual Summarization
Cross-lingual summarization (CLS) is the task of producing a summary in one particular language for a source document in a different language.
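Stated as an interface, the task contract is simply document in, summary in another language out; the signature below is an illustrative sketch, not any paper's API:

```python
# The CLS contract as a type signature: a source-language document in,
# a target-language summary out, with src_lang != tgt_lang.
def cross_lingual_summarize(document: str, src_lang: str, tgt_lang: str) -> str:
    """Return a summary of `document` (written in src_lang) in tgt_lang."""
    raise NotImplementedError  # model-specific implementation goes here
```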