no code implementations • 22 Mar 2025 • Shengyun Si, Xinpeng Wang, Guangyao Zhai, Nassir Navab, Barbara Plank
Recent advancements in large language models (LLMs) have demonstrated that fine-tuning and human alignment can render LLMs harmless.
no code implementations • 14 Feb 2025 • Xinpeng Wang, Rong Zhou, Han Xie, Xiaoying Tang, Lifang He, Carl Yang
Building on this realistic simulation, we propose ClusMFL, a novel MFL framework that leverages feature clustering for cross-institutional brain imaging analysis under modality incompleteness.
1 code implementation • 17 Dec 2024 • Bolei Ma, Berk Yoztyurk, Anna-Carolina Haensch, Xinpeng Wang, Markus Herklotz, Frauke Kreuter, Barbara Plank, Matthias Assenmacher
In recent research, large language models (LLMs) have been increasingly used to investigate public opinions.
1 code implementation • 23 Oct 2024 • Qiqi Chen, Xinpeng Wang, Philipp Mondorf, Michael A. Hedderich, Barbara Plank
In this paper, we analyze the roles of the generator and discriminator separately to better understand the conditions when ToT is beneficial.
1 code implementation • 15 Oct 2024 • Xinpeng Wang, Yongxin Guo, Xiaoying Tang
Domain Generalization (DG) aims to train models that can effectively generalize to unseen domains.
no code implementations • 6 Oct 2024 • Xuan Gong, Tianshi Ming, Xinpeng Wang, Zhihua Wei
Both the visual encoder and the large language model (LLM) decoder in LVLMs are Transformer-based, allowing the model to extract visual information and generate text outputs via attention mechanisms.
no code implementations • 4 Oct 2024 • Xinpeng Wang, Chengzhi Hu, Paul Röttger, Barbara Plank
We also show that our approach can be used for fine-grained calibration of model safety.
1 code implementation • 25 Jun 2024 • Beiduo Chen, Xinpeng Wang, Siyao Peng, Robert Litschko, Anna Korhonen, Barbara Plank
This study proposes to exploit LLMs to approximate HJDs using a small number of expert labels and explanations.
no code implementations • 16 Jun 2024 • Bolei Ma, Xinpeng Wang, Tiancheng Hu, Anna-Carolina Haensch, Michael A. Hedderich, Barbara Plank, Frauke Kreuter
This paper aims to bridge this gap by providing a comprehensive overview of recent works on the evaluation of AOVs in LLMs.
no code implementations • 28 May 2024 • Yang Zhang, Yawei Li, Xinpeng Wang, Qianli Shen, Barbara Plank, Bernd Bischl, Mina Rezaei, Kenji Kawaguchi
Overparametrized transformer networks are the state-of-the-art architecture for Large Language Models (LLMs).
1 code implementation • 12 Apr 2024 • Xinpeng Wang, Chengzhi Hu, Bolei Ma, Paul Röttger, Barbara Plank
We show that text answers are more robust to question perturbations than first-token probabilities when the first-token answers mismatch the text answers.
no code implementations • 7 Mar 2024 • Xinpeng Wang, Shitong Duan, Xiaoyuan Yi, Jing Yao, Shanlin Zhou, Zhihua Wei, Peng Zhang, Dongkuan Xu, Maosong Sun, Xing Xie
Big models have achieved revolutionary breakthroughs in the field of AI, but they might also raise potential concerns.
1 code implementation • 22 Feb 2024 • Xinpeng Wang, Bolei Ma, Chengzhi Hu, Leon Weber-Genzel, Paul Röttger, Frauke Kreuter, Dirk Hovy, Barbara Plank
The open-ended nature of language generation makes the evaluation of autoregressive large language models (LLMs) challenging.
1 code implementation • 13 Dec 2023 • Xinpeng Wang, Xiaoyuan Yi, Han Jiang, Shanlin Zhou, Zhihua Wei, Xing Xie
Warning: this paper includes model outputs showing offensive content.
no code implementations • 23 Oct 2023 • Xinpeng Wang, Barbara Plank
We show that in the active learning setting, a multi-head model performs significantly better than a single-head model in terms of uncertainty estimation.
1 code implementation • 20 Oct 2023 • Han Jiang, Rui Wang, Zhihua Wei, Yu Li, Xinpeng Wang
Furthermore, our in-depth analysis verifies that the advanced selection of review subsets and the two-stage training scheme are vital to boosting the summarization performance.
1 code implementation • 24 May 2023 • Xinpeng Wang, Leonie Weissweiler, Hinrich Schütze, Barbara Plank
To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings.
1 code implementation • COLING 2022 • Xinpeng Wang, Han Jiang, Zhihua Wei, Shanlin Zhou
Story generation has emerged as an interesting yet challenging NLP task in recent years.
no code implementations • 24 Apr 2022 • Feifei Xu, Shanlin Zhou, Xinpeng Wang, Yunpu Ma, Wenkai Zhang, Zhisong Li
To merge these two forms of knowledge into the dialogue effectively, we design a dynamic virtual knowledge selector and a controller that help to enrich and expand the knowledge space.
2 code implementations • 17 Dec 2020 • Xinpeng Wang, Chandan Yeshwanth, Matthias Nießner
In contrast, we do not use any appearance information, and implicitly learn object relations using the self-attention mechanism of transformers.
1 code implementation • 11 Oct 2020 • Feifei Xu, Xinpeng Wang, Yunpu Ma, Volker Tresp, Yuyi Wang, Shanlin Zhou, Haizhou Du
In our work, we aim to design an emotional line for each character, drawing on multiple emotions common in psychological theories, with the goal of generating stories with richer emotional changes in the characters.