Role-Play Paradox in Large Language Models: Reasoning Performance Gains and Ethical Dilemmas
Role-play in large language models (LLMs) enhances their ability to generate contextually relevant, high-quality responses by simulating diverse cognitive perspectives. However, our study identifies significant risks associated with this technique. First, we demonstrate that auto-tuning, a method that automatically selects a model's role based on the question, can lead to harmful outputs even when the model is assigned neutral roles. Second, we investigate how different roles affect the likelihood of generating biased or harmful content. Testing on benchmarks containing stereotypical and harmful questions, we find that role-play consistently amplifies the risk of biased outputs. Our results underscore the need for careful consideration of both role simulation and the tuning process when deploying LLMs in sensitive or high-stakes contexts.
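For illustration, the following is a minimal sketch of the kind of automatic role-selection pipeline the abstract describes, not the paper's implementation; the `chat` helper and both prompts are hypothetical placeholders for a real chat-completion call.

```python
def chat(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call;
    wire this to a real LLM client before use."""
    raise NotImplementedError

def answer_with_auto_role(question: str) -> str:
    # Step 1: ask the model to pick an expert role suited to the question
    # (the automatic role-selection step the abstract calls auto-tuning).
    role = chat(
        "Name the single expert role best suited to answer the question "
        f"below. Reply with the role name only.\nQuestion: {question}"
    ).strip()

    # Step 2: answer while playing that role. Per the abstract's finding,
    # this step can amplify biased or harmful outputs even when the
    # selected role is neutral, so it warrants downstream safety checks.
    return chat(f"You are {role}. Answer the question.\nQuestion: {question}")
```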