Persona is a Double-edged Sword: Mitigating the Negative Impact of Role-playing Prompts in Zero-shot Reasoning Tasks

16 Aug 2024 · Junseok Kim, Nakyeong Yang, Kyomin Jung

Recent studies demonstrate that assigning a role-playing persona to an LLM improves its reasoning capability. However, choosing an adequate persona is difficult, since LLMs are extremely sensitive to the assigned prompts; inaccurately defined personas can therefore hinder LLMs and degrade their reasoning capabilities. In this paper, we first investigate the potential negative impact of injecting personas into language models. We then propose a novel framework, Jekyll & Hyde, which ensembles the outcomes of role-playing and neutral prompts to make reasoning more robust. Specifically, Jekyll & Hyde uses an LLM to predict an appropriate persona when defining the role-playing prompt. It then collects two candidate solutions, one from the role-playing prompt and one from the neutral prompt, and selects the better solution using an LLM evaluator. Our experimental analysis shows that role-playing prompts sometimes distract LLMs, degrading their reasoning abilities on 7 out of 12 datasets with Llama 3. Meanwhile, Jekyll & Hyde improves reasoning capabilities by selecting the better of the candidate solutions on twelve widely used natural language reasoning datasets. In addition, we find that LLM-generated personas yield more stable results than handcrafted personas.
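The pipeline described in the abstract (persona generation, dual candidate answers, and evaluator-based selection) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `call_llm` is a hypothetical stand-in for any chat-completion API, and the prompt wordings are simplified assumptions.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g., a Llama 3 or OpenAI endpoint)."""
    raise NotImplementedError


def jekyll_and_hyde(question: str, llm=call_llm) -> str:
    """Ensemble of role-playing and neutral prompts with an LLM evaluator.

    A sketch of the three steps named in the abstract, under the
    assumptions stated above.
    """
    # 1. Let the LLM itself generate an appropriate persona for the question.
    persona = llm(f"Name the expert best suited to answer this question: {question}")

    # 2. Collect two candidate solutions: persona-conditioned and neutral.
    persona_answer = llm(f"You are {persona}. Answer the question: {question}")
    neutral_answer = llm(f"Answer the question: {question}")

    # 3. Ask an LLM evaluator to pick the better candidate; fall back to the
    #    neutral answer when the verdict does not choose the persona answer.
    verdict = llm(
        "Which answer is better? Reply with 'A' or 'B' only.\n"
        f"Question: {question}\nA: {persona_answer}\nB: {neutral_answer}"
    )
    return persona_answer if verdict.strip().upper().startswith("A") else neutral_answer
```

Because both candidates are always generated, the framework pays roughly double the inference cost in exchange for robustness against a badly chosen persona.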

