no code implementations • 11 Nov 2023 • Xiaoqian Li, Ercong Nie, Sheng Liang
The remarkable ability of Large Language Models (LLMs) to understand and follow instructions has sometimes been limited by their in-context learning (ICL) performance in low-resource languages.
no code implementations • 1 Nov 2023 • Xiaoqian Li, Ercong Nie, Sheng Liang
The promise of Large Language Models (LLMs) in Natural Language Processing has often been overshadowed by their limited performance in low-resource languages such as Bangla.
no code implementations • 15 Oct 2023 • Nadezhda Chirkova, Sheng Liang, Vassilina Nikoulina
Zero-shot cross-lingual generation involves finetuning a multilingual pretrained language model (mPLM) on a generation task in one language and then using it to make predictions for this task in other languages.
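A minimal sketch of this setup, assuming mT5-small via Hugging Face Transformers and toy placeholder data (a real experiment would finetune on a full task dataset): train on an English generation example, then generate for an input in another language with no finetuning in that language.

```python
# Sketch only: finetune a multilingual pretrained model on an English generation
# task, then apply it unchanged to another language (zero-shot transfer).
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
import torch

tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Toy English training pair (placeholder data, not from the paper).
src = tokenizer("summarize: The quick brown fox jumps over the lazy dog.",
                return_tensors="pt")
tgt = tokenizer("A fox jumps over a dog.", return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
loss = model(input_ids=src.input_ids,
             attention_mask=src.attention_mask,
             labels=tgt.input_ids).loss
loss.backward()
optimizer.step()

# Zero-shot step: generate for a German input without any German finetuning.
model.eval()
de = tokenizer("summarize: Der schnelle braune Fuchs springt über den faulen Hund.",
               return_tensors="pt")
print(tokenizer.decode(model.generate(**de, max_new_tokens=20)[0],
                       skip_special_tokens=True))
```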
1 code implementation • 19 Dec 2022 • Ercong Nie, Sheng Liang, Helmut Schmid, Hinrich Schütze
Multilingual Pretrained Language Models (MPLMs) have shown their strong multilinguality in recent empirical cross-lingual transfer studies.
no code implementations • Findings (ACL) 2022 • Sheng Liang, Mengjie Zhao, Hinrich Schütze
Recent research has made impressive progress in large-scale multimodal pre-training.
1 code implementation • 16 Sep 2021 • Sheng Liang, Philipp Dufter, Hinrich Schütze
Multilingual pretrained language models (MPLMs) exhibit multilinguality and are well suited for transfer across languages.
1 code implementation • COLING 2020 • Sheng Liang, Philipp Dufter, Hinrich Schütze
Pretrained language models (PLMs) learn stereotypes held by humans and reflected in text from their training corpora, including gender bias.