Search Results for author: Chengzhi Hu

Found 3 papers, 2 papers with code

Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think

no code implementations • 12 Apr 2024 • Xinpeng Wang, Chengzhi Hu, Bolei Ma, Paul Röttger, Barbara Plank

We show that text answers are more robust to question perturbations than first-token probabilities when the first-token answers mismatch the text answers.

Multiple-choice
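A minimal sketch of the two answer-extraction strategies this paper compares: selecting the option letter with the highest first-token probability versus parsing the letter out of the generated text. This is not the authors' code; the model name, prompt format, and parsing heuristic are illustrative assumptions.

```python
# Sketch: first-token-probability vs. text-answer extraction for MCQ.
# Model and prompt are assumptions, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed instruction-tuned LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Question: Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")

# Strategy 1: first-token probability. Compare the logits assigned to the
# option letters at the first generated position.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
option_ids = [tokenizer.encode(f" {c}", add_special_tokens=False)[0] for c in "ABCD"]
first_token_answer = "ABCD"[logits[option_ids].argmax().item()]

# Strategy 2: text answer. Generate a short continuation and read the
# option letter out of the decoded text.
gen = model.generate(**inputs, max_new_tokens=10, do_sample=False)
text = tokenizer.decode(gen[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
text_answer = next((c for c in text if c in "ABCD"), None)

# The paper's finding: when the two disagree, the text answer tends to be
# the more robust one under question perturbations.
print(first_token_answer, text_answer)
```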

mPLM-Sim: Better Cross-Lingual Similarity and Transfer in Multilingual Pretrained Language Models

1 code implementation • 23 May 2023 • Peiqin Lin, Chengzhi Hu, Zheyu Zhang, André F. T. Martins, Hinrich Schütze

Recent multilingual pretrained language models (mPLMs) have been shown to encode strong language-specific signals, which are not explicitly provided during pretraining.

Open-Ended Question Answering • Zero-Shot Cross-Lingual Transfer
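A minimal sketch of the general idea of measuring cross-lingual similarity with an mPLM: embed parallel sentences in two languages, average them into per-language vectors, and compare the centroids. This is my assumption of the approach, not the released mPLM-Sim code; the encoder, pooling, and toy corpus are illustrative.

```python
# Sketch: language similarity from mPLM sentence embeddings.
# Encoder choice and mean pooling are assumptions for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"  # assumed multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def embed(sentences):
    """Mean-pooled last hidden states as sentence embeddings."""
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

# Toy multi-parallel corpus: the same sentences in German and Dutch.
german = ["Die Katze schläft.", "Es regnet heute."]
dutch = ["De kat slaapt.", "Het regent vandaag."]

# Per-language centroid, then cosine similarity as the similarity score.
sim = torch.cosine_similarity(embed(german).mean(0), embed(dutch).mean(0), dim=0)
print(f"de-nl similarity: {sim.item():.3f}")
```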
