Search Results for author: Sheng Liang

Found 7 papers, 3 papers with code

From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL

no code implementations • 11 Nov 2023 • Xiaoqian Li, Ercong Nie, Sheng Liang

The remarkable ability of Large Language Models (LLMs) to understand and follow instructions has sometimes been limited by their in-context learning (ICL) performance in low-resource languages.

In-Context Learning • Retrieval

Crosslingual Retrieval Augmented In-context Learning for Bangla

no code implementations • 1 Nov 2023 • Xiaoqian Li, Ercong Nie, Sheng Liang

The promise of Large Language Models (LLMs) in Natural Language Processing has often been overshadowed by their limited performance in low-resource languages such as Bangla.

In-Context Learning • Retrieval

Empirical study of pretrained multilingual language models for zero-shot cross-lingual generation

no code implementations • 15 Oct 2023 • Nadezhda Chirkova, Sheng Liang, Vassilina Nikoulina

Zero-shot cross-lingual generation involves finetuning a multilingual pretrained language model (mPLM) on a generation task in one language and then using it to make predictions for this task in other languages.

Language Modelling • Pretrained Multilingual Language Models
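To make the zero-shot cross-lingual generation setup described in the entry above concrete, here is a minimal sketch: finetune a seq2seq mPLM on a generation task in one language, then apply it unchanged to another language. The choice of google/mt5-small, the toy summarization pairs, and the training hyperparameters are illustrative assumptions, not the paper's experimental setup.

# Minimal sketch of zero-shot cross-lingual generation (assumed mPLM: mT5-small).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/mt5-small"  # placeholder mPLM; any seq2seq mPLM could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy English summarization pairs standing in for the finetuning-language data.
train_pairs = [
    ("summarize: The weather was sunny and warm all day in the city.",
     "Sunny, warm day."),
    ("summarize: The team won the match after scoring twice in the final minutes.",
     "Team wins late."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for epoch in range(3):  # tiny loop, for illustration only
    for source, target in train_pairs:
        inputs = tokenizer(source, return_tensors="pt")
        labels = tokenizer(target, return_tensors="pt").input_ids
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Zero-shot step: apply the English-finetuned model to an input in another language.
model.eval()
german_input = "summarize: Das Wetter war den ganzen Tag sonnig und warm in der Stadt."
ids = tokenizer(german_input, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))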

Cross-Lingual Retrieval Augmented Prompt for Low-Resource Languages

1 code implementation • 19 Dec 2022 • Ercong Nie, Sheng Liang, Helmut Schmid, Hinrich Schütze

Multilingual Pretrained Language Models (MPLMs) have shown their strong multilinguality in recent empirical cross-lingual transfer studies.

Cross-Lingual Transfer • Natural Language Inference • +3

Locating Language-Specific Information in Contextualized Embeddings

1 code implementation • 16 Sep 2021 • Sheng Liang, Philipp Dufter, Hinrich Schütze

Multilingual pretrained language models (MPLMs) exhibit multilinguality and are well suited for transfer across languages.

Monolingual and Multilingual Reduction of Gender Bias in Contextualized Representations

1 code implementation • COLING 2020 • Sheng Liang, Philipp Dufter, Hinrich Schütze

Pretrained language models (PLMs) learn stereotypes held by humans and reflected in text from their training corpora, including gender bias.

Language Modelling • Sentence
