Search Results for author: Xinpeng Wang

Found 21 papers, 12 papers with code

Think Before Refusal: Triggering Safety Reflection in LLMs to Mitigate False Refusal Behavior

no code implementations · 22 Mar 2025 · Shengyun Si, Xinpeng Wang, Guangyao Zhai, Nassir Navab, Barbara Plank

Recent advancements in large language models (LLMs) have demonstrated that fine-tuning and human alignment can render LLMs harmless.

ClusMFL: A Cluster-Enhanced Framework for Modality-Incomplete Multimodal Federated Learning in Brain Imaging Analysis

no code implementations · 14 Feb 2025 · Xinpeng Wang, Rong Zhou, Han Xie, Xiaoying Tang, Lifang He, Carl Yang

Building on this realistic simulation, we propose ClusMFL, a novel MFL framework that leverages feature clustering for cross-institutional brain imaging analysis under modality incompleteness.

Contrastive Learning · Federated Learning +1

Understanding When Tree of Thoughts Succeeds: Larger Models Excel in Generation, Not Discrimination

1 code implementation · 23 Oct 2024 · Qiqi Chen, Xinpeng Wang, Philipp Mondorf, Michael A. Hedderich, Barbara Plank

In this paper, we analyze the roles of the generator and discriminator separately to better understand the conditions when ToT is beneficial.

DAMRO: Dive into the Attention Mechanism of LVLM to Reduce Object Hallucination

no code implementations · 6 Oct 2024 · Xuan Gong, Tianshi Ming, Xinpeng Wang, Zhihua Wei

Both the visual encoder and the Large Language Model (LLM) decoder in LVLMs are Transformer-based, allowing the model to extract visual information and generate text outputs via attention mechanisms.

Attribute · Decoder +4
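The abstract above rests on the fact that both the visual encoder and the LLM decoder route information through attention. As a refresher, a minimal NumPy sketch of standard scaled dot-product attention (shapes and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each query attends over all keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query tokens, dimension 8
K = rng.normal(size=(6, 8))  # 6 key tokens
V = rng.normal(size=(6, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
```

Each row of `weights` is a distribution over the key tokens; in an LVLM, inspecting such rows is what "diving into the attention mechanism" means in practice.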

The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models

no code implementations · 16 Jun 2024 · Bolei Ma, Xinpeng Wang, Tiancheng Hu, Anna-Carolina Haensch, Michael A. Hedderich, Barbara Plank, Frauke Kreuter

This paper aims to bridge this gap by providing a comprehensive overview of recent works on the evaluation of AOVs in LLMs.

FinerCut: Finer-grained Interpretable Layer Pruning for Large Language Models

no code implementations · 28 May 2024 · Yang Zhang, Yawei Li, Xinpeng Wang, Qianli Shen, Barbara Plank, Bernd Bischl, Mina Rezaei, Kenji Kawaguchi

Overparametrized transformer networks are the state-of-the-art architecture for Large Language Models (LLMs).

Decoder

Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think

1 code implementation · 12 Apr 2024 · Xinpeng Wang, Chengzhi Hu, Bolei Ma, Paul Röttger, Barbara Plank

We show that the text answers are more robust to question perturbations than the first token probabilities, when the first token answers mismatch the text answers.

Multiple-choice
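The robustness comparison above hinges on two ways of reading a model's multiple-choice answer: the option whose first token has the highest probability, versus the option actually named in the generated text. A minimal sketch of that mismatch check, using hypothetical helper names and a deliberately naive string match (not the paper's implementation):

```python
OPTIONS = ("A", "B", "C", "D")

def first_token_answer(token_probs, options=OPTIONS):
    # Pick the option letter with the highest first-token probability.
    return max(options, key=lambda o: token_probs.get(o, 0.0))

def text_answer(generated_text, options=OPTIONS):
    # Read the option letter named in the model's free-form text answer.
    # Naive substring match, for illustration only.
    for o in options:
        if o in generated_text:
            return o
    return None

# Toy example: the first-token distribution favors "A",
# but the generated text commits to "B".
probs = {"A": 0.40, "B": 0.35, "C": 0.15, "D": 0.10}
text = "The correct answer is B."
mismatch = first_token_answer(probs) != text_answer(text)
```

Cases where `mismatch` is true are exactly the ones the paper highlights: there, the text answer tends to be the more perturbation-robust readout.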

ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation

no code implementations · 23 Oct 2023 · Xinpeng Wang, Barbara Plank

We show that in the active learning setting, a multi-head model performs significantly better than a single-head model in terms of uncertainty estimation.

Active Learning
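The multi-head setup above attaches one classification head per annotator, and uncertainty is then estimated from the heads jointly rather than from a single output. A hedged sketch of one common estimate, predictive entropy over the mean of per-head softmax outputs (the paper's exact uncertainty measure may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_uncertainty(head_logits):
    # head_logits: (n_heads, n_classes) — one classification head per annotator.
    probs = softmax(np.asarray(head_logits), axis=-1)
    mean = probs.mean(axis=0)                        # aggregate prediction
    entropy = -(mean * np.log(mean + 1e-12)).sum()   # predictive entropy
    return mean, entropy

# Heads that disagree should yield higher entropy than heads that agree,
# which is what makes the multi-head model useful for active learning.
agree = [[3.0, 0.0], [3.0, 0.0], [3.0, 0.0]]
disagree = [[3.0, 0.0], [0.0, 3.0], [3.0, 0.0]]
_, h_agree = multi_head_uncertainty(agree)
_, h_disagree = multi_head_uncertainty(disagree)
```

In an active-learning loop, examples with high `entropy` (often those where annotator heads disagree, reflecting human label variation) would be prioritized for labeling.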

Large-Scale and Multi-Perspective Opinion Summarization with Diverse Review Subsets

1 code implementation · 20 Oct 2023 · Han Jiang, Rui Wang, Zhihua Wei, Yu Li, Xinpeng Wang

Furthermore, our in-depth analysis verifies that the advanced selection of review subsets and the two-stage training scheme are vital to boosting the summarization performance.

Opinion Summarization

Open-domain Dialogue Generation Grounded with Dynamic Multi-form Knowledge Fusion

no code implementations · 24 Apr 2022 · Feifei Xu, Shanlin Zhou, Xinpeng Wang, Yunpu Ma, Wenkai Zhang, Zhisong Li

To merge these two forms of knowledge into the dialogue effectively, we design a dynamic virtual knowledge selector and a controller that help to enrich and expand the knowledge space.

Dialogue Generation · Form +2

SceneFormer: Indoor Scene Generation with Transformers

2 code implementations · 17 Dec 2020 · Xinpeng Wang, Chandan Yeshwanth, Matthias Nießner

In contrast, we do not use any appearance information, and implicitly learn object relations using the self-attention mechanism of transformers.

Scene Generation

Controllable Multi-Character Psychology-Oriented Story Generation

1 code implementation · 11 Oct 2020 · Feifei Xu, Xinpeng Wang, Yunpu Ma, Volker Tresp, Yuyi Wang, Shanlin Zhou, Haizhou Du

In our work, we aim to design an emotional line for each character that considers multiple emotions common in psychological theories, with the goal of generating stories with richer emotional changes in the characters.

Sentence · Story Generation
