no code implementations • 12 Apr 2024 • Xinpeng Wang, Chengzhi Hu, Bolei Ma, Paul Röttger, Barbara Plank
We show that text answers are more robust to question perturbations than first-token probabilities when the first-token answers mismatch the text answers.
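A minimal sketch of the mismatch the abstract refers to: for a multiple-choice prompt, the option with the highest first-token probability need not match the option the model actually names in its free-form text answer. The functions `get_first_token_probs` and `generate_text` below are hypothetical stand-ins for a real LLM API, and the numbers are illustrative only.

```python
def get_first_token_probs(prompt):
    # Hypothetical: probabilities the model assigns to each option letter
    # as its *first* generated token (illustrative values).
    return {"A": 0.40, "B": 0.35, "C": 0.15, "D": 0.10}

def generate_text(prompt):
    # Hypothetical: the model's free-form text answer to the same prompt.
    return "I would choose option B, because ..."

def first_token_answer(prompt):
    probs = get_first_token_probs(prompt)
    return max(probs, key=probs.get)

def text_answer(prompt, options=("A", "B", "C", "D")):
    # Naive extraction: the first option letter mentioned in the reply.
    reply = generate_text(prompt)
    for ch in reply:
        if ch in options:
            return ch
    return None

prompt = "Which option is correct?\nA) ...\nB) ...\nC) ...\nD) ..."
ft = first_token_answer(prompt)
txt = text_answer(prompt)
print(ft, txt, ft == txt)  # the two evaluation protocols disagree here
```

Under this toy setup, scoring the model by first-token probabilities credits option A, while scoring the generated text credits option B, which is the kind of divergence the paper studies.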
no code implementations • 7 Mar 2024 • Xinpeng Wang, Shitong Duan, Xiaoyuan Yi, Jing Yao, Shanlin Zhou, Zhihua Wei, Peng Zhang, Dongkuan Xu, Maosong Sun, Xing Xie
Big models have achieved revolutionary breakthroughs in AI, but they may also pose potential risks.
1 code implementation • 22 Feb 2024 • Xinpeng Wang, Bolei Ma, Chengzhi Hu, Leon Weber-Genzel, Paul Röttger, Frauke Kreuter, Dirk Hovy, Barbara Plank
The open-ended nature of language generation makes the evaluation of autoregressive large language models (LLMs) challenging.
1 code implementation • 13 Dec 2023 • Xinpeng Wang, Xiaoyuan Yi, Han Jiang, Shanlin Zhou, Zhihua Wei, Xing Xie
Warning: this paper includes model outputs showing offensive content.
no code implementations • 23 Oct 2023 • Xinpeng Wang, Barbara Plank
We show that in the active learning setting, a multi-head model performs significantly better than a single-head model in terms of uncertainty estimation.
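One common way a multi-head model yields uncertainty estimates is to average the heads' predictive distributions and decompose the entropy of the mean into an average per-head entropy plus a disagreement term. The sketch below illustrates that decomposition with hand-picked toy distributions; it is not the paper's implementation, and the three heads' probabilities are invented for illustration.

```python
import math

def entropy(p):
    # Shannon entropy (nats) of a probability vector.
    return -sum(q * math.log(q) for q in p if q > 0.0)

def mean_distribution(head_probs):
    # Average the per-head predictive distributions element-wise.
    k = len(head_probs[0])
    n = len(head_probs)
    return [sum(h[i] for h in head_probs) / n for i in range(k)]

# Three hypothetical heads that disagree on a 3-class example:
heads = [
    [0.90, 0.05, 0.05],
    [0.10, 0.85, 0.05],
    [0.30, 0.30, 0.40],
]

mean_p = mean_distribution(heads)
total_unc = entropy(mean_p)                      # predictive entropy
avg_head_unc = sum(entropy(h) for h in heads) / len(heads)
disagreement = total_unc - avg_head_unc          # mutual-information-style term

print(round(total_unc, 3), round(disagreement, 3))
```

The disagreement term is zero when all heads agree and grows as they diverge, which is one reason ensembles of heads can rank unlabeled examples for active learning more reliably than a single softmax confidence.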
1 code implementation • 20 Oct 2023 • Han Jiang, Rui Wang, Zhihua Wei, Yu Li, Xinpeng Wang
Furthermore, our in-depth analysis verifies that the advanced selection of review subsets and the two-stage training scheme are vital to boosting summarization performance.
1 code implementation • 24 May 2023 • Xinpeng Wang, Leonie Weissweiler, Hinrich Schütze, Barbara Plank
To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings.
1 code implementation • COLING 2022 • Xinpeng Wang, Han Jiang, Zhihua Wei, Shanlin Zhou
Story generation has emerged as an interesting yet challenging NLP task in recent years.
no code implementations • 24 Apr 2022 • Feifei Xu, Shanlin Zhou, Xinpeng Wang, Yunpu Ma, Wenkai Zhang, Zhisong Li
To merge these two forms of knowledge into the dialogue effectively, we design a dynamic virtual knowledge selector and a controller that help to enrich and expand the knowledge space.
2 code implementations • 17 Dec 2020 • Xinpeng Wang, Chandan Yeshwanth, Matthias Nießner
In contrast, we do not use any appearance information, and implicitly learn object relations using the self-attention mechanism of transformers.
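The idea of relating objects without appearance features can be sketched with a single, weight-free self-attention step over tokens that encode only category and position. This is a deliberately stripped-down illustration (no learned query/key/value projections, made-up embeddings), not the paper's model.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    # Scaled dot-product self-attention where each token attends to all
    # tokens; outputs are convex combinations of the input vectors.
    d = len(tokens[0])
    scale = math.sqrt(d)
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in tokens]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, tokens))
                    for i in range(d)])
    return out

# Each object is encoded only as (category id, x, y) -- no appearance:
objects = [
    [1.0, 0.0, 0.0],   # e.g. a chair near the origin
    [1.0, 0.1, 0.0],   # another chair nearby
    [2.0, 5.0, 5.0],   # a table farther away
]
mixed = self_attention(objects)
print(mixed)
```

Even with such sparse tokens, the attention weights let every object condition on every other object's category and position, which is how relations can be learned implicitly rather than hand-specified.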
1 code implementation • 11 Oct 2020 • Feifei Xu, Xinpeng Wang, Yunpu Ma, Volker Tresp, Yuyi Wang, Shanlin Zhou, Haizhou Du
In our work, we design an emotion line for each character that draws on multiple emotions common in psychological theories, with the goal of generating stories with richer emotional changes in the characters.