IndicBART: A Pre-trained Model for Indic Natural Language Generation

ACL ARR November 2021 · Anonymous

We study pre-trained sequence-to-sequence models for a specific language family, with a focus on Indic languages. We present IndicBART, a multilingual, sequence-to-sequence pre-trained model covering 11 Indic languages and English. IndicBART utilizes the orthographic similarity between Indic scripts to improve transfer learning between similar Indic languages. We evaluate IndicBART on two NLG tasks: Neural Machine Translation (NMT) and extreme summarization. Our experiments show that a language-family-specific model like IndicBART is competitive with large pre-trained models like mBART50 despite being significantly smaller. It also performs well in very low-resource translation scenarios, where the languages are not included in pre-training or fine-tuning. Script sharing, multilingual training, and better utilization of limited model capacity contribute to the good performance of the compact IndicBART model.
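The script-sharing idea rests on the fact that the major Indic scripts occupy parallel 128-codepoint Unicode blocks, so text in any of them can be mapped onto a single script (Devanagari) by a fixed codepoint offset before tokenization, letting orthographically similar words from related languages share subword tokens. The sketch below is only an illustration of that idea, not the paper's exact preprocessing; a production pipeline (e.g. the transliteration utilities in the Indic NLP Library) also handles script-specific exceptions that this offset mapping ignores.

```python
# Minimal sketch of script unification onto Devanagari via Unicode block offsets.
# The block bases are standard Unicode ranges; exception handling is omitted.

SCRIPT_BASES = {
    "devanagari": 0x0900,  # Hindi, Marathi, Nepali, ...
    "bengali":    0x0980,  # Bengali, Assamese
    "gurmukhi":   0x0A00,  # Punjabi
    "gujarati":   0x0A80,
    "oriya":      0x0B00,
    "tamil":      0x0B80,
    "telugu":     0x0C00,
    "kannada":    0x0C80,
    "malayalam":  0x0D00,
}
BLOCK_SIZE = 0x80
DEVANAGARI_BASE = SCRIPT_BASES["devanagari"]


def to_devanagari(text: str, source_script: str) -> str:
    """Map text from an Indic script onto the Devanagari block by offset."""
    base = SCRIPT_BASES[source_script]
    out = []
    for ch in text:
        cp = ord(ch)
        if base <= cp < base + BLOCK_SIZE:
            out.append(chr(cp - base + DEVANAGARI_BASE))
        else:
            out.append(ch)  # punctuation, digits, Latin text pass through
    return "".join(out)


# Example: the Bengali word for "India" mapped onto Devanagari codepoints.
print(to_devanagari("ভারত", "bengali"))  # -> "भारत"
```

After unification, a single shared subword vocabulary can be trained over the Devanagari-mapped corpus, which is one way a compact model can make better use of its limited capacity across related languages.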
