Most existing dialogue systems fail to respond properly to potentially unsafe user utterances by either ignoring or passively agreeing with them.
To address this issue, we introduce ProsocialDialog, the first large-scale multi-turn dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs).
ProsocialDialog consists of 58K dialogues between a speaker showing potentially unsafe behavior and a speaker giving constructive feedback for more socially acceptable behavior. Specifically, it contains a rich suite of 331K utterances, 160K unique RoTs, and 497K dialogue safety labels, all accompanied by free-form rationales.
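
To get a feel for the data, here is a minimal sketch of loading and inspecting it with the Hugging Face `datasets` library. The hub identifier `allenai/prosocial-dialog` and the field names shown (`context`, `response`, `rots`, `safety_label`) are assumptions based on the public dataset card, not something stated in this page.

```python
# Minimal sketch: load ProsocialDialog and inspect one example.
# Assumes the dataset is hosted on the Hugging Face Hub as
# "allenai/prosocial-dialog" with the field names used below.
from datasets import load_dataset

dataset = load_dataset("allenai/prosocial-dialog")

example = dataset["train"][0]
print(example["context"])       # potentially unsafe user utterance
print(example["response"])      # constructive, prosocial response
print(example["rots"])          # rules-of-thumb grounding the response
print(example["safety_label"])  # dialogue safety label for this turn
```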