Search Results for author: Tingjiang Wei

Found 2 papers, 0 papers with code

FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models

no code implementations • 21 Aug 2023 • Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He

Detecting stereotypes and biases in Large Language Models (LLMs) can enhance fairness and reduce adverse impacts on individuals or groups when these LLMs are applied.

Fairness
