Wikipedia Summarization

1 paper with code • 1 benchmark • 1 dataset

Wikipedia Summarization is the task of automatically generating a concise summary of a Wikipedia article.

Most implemented papers

IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation

gsarti/it5 • 7 Mar 2022

The T5 model and its unified text-to-text paradigm contributed to advancing the state-of-the-art for many natural language processing tasks.
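The released IT5 checkpoints are T5-style encoder-decoder models, so they can be applied to summarization as a text-to-text problem. The sketch below shows how such a seq2seq checkpoint could be run for abstractive summarization with the Hugging Face Transformers library; the checkpoint name and the task prefix are illustrative assumptions rather than values taken from this page.

```python
# Minimal sketch: abstractive summarization with a T5-style seq2seq checkpoint
# using Hugging Face Transformers. The model name below is an assumption for
# illustration; a summarization-fine-tuned checkpoint would normally be used.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "gsarti/it5-base"  # assumed checkpoint name (gsarti/it5 namespace)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "..."  # text of a Wikipedia article (Italian, in the IT5 setting)

# "summarize: " is the generic T5 task prefix; the exact prefix used for a
# given fine-tuned checkpoint may differ.
inputs = tokenizer(
    "summarize: " + article,
    return_tensors="pt",
    truncation=True,
    max_length=512,
)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```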