28 Mar 2023 • Nankai Lin, Junheng He, Zhenghang Tang, Dong Zhou, Aimin Yang
The multilingual text representation module encodes the text with a multilingual pre-trained language model; the language fusion module uses contrastive learning to align the semantic spaces of the different languages; and the text debiasing module uses contrastive learning to prevent the model from recovering sensitive-attribute information from the representation. A sketch of how these three pieces could fit together is given below.
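The abstract does not specify the backbone, the pooling strategy, or the exact contrastive objectives, so the following is a minimal sketch under stated assumptions: `xlm-roberta-base` as the multilingual encoder, mean pooling in a helper `encode`, a standard in-batch InfoNCE loss in `info_nce`, parallel sentence pairs as positives for language fusion, and counterfactual pairs differing only in the sensitive attribute as positives for debiasing. The helper names and example sentences are hypothetical, not from the paper.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Assumed backbone; the paper only says "a multilingual pre-trained language model".
MODEL_NAME = "xlm-roberta-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)


def encode(texts):
    """Multilingual text representation: mean-pooled encoder hidden states."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state            # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)            # (B, H)


def info_nce(anchors, positives, temperature=0.07):
    """Standard InfoNCE contrastive loss with in-batch negatives."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature
    labels = torch.arange(a.size(0))
    return F.cross_entropy(logits, labels)


# Language fusion: pull parallel sentences in different languages together
# so their semantic spaces become consistent (assumed parallel data).
src_texts = ["The weather is nice today.", "She solved the problem quickly."]
tgt_texts = ["El clima es agradable hoy.", "Ella resolvió el problema rápidamente."]
fusion_loss = info_nce(encode(src_texts), encode(tgt_texts))

# Text debiasing: one plausible reading of the description is to pull together
# counterfactual pairs that differ only in the sensitive attribute, so the
# representation carries no signal about it (not the paper's exact formulation).
text_a = ["She is a brilliant engineer.", "The woman applied for the loan."]
text_b = ["He is a brilliant engineer.", "The man applied for the loan."]
debias_loss = info_nce(encode(text_a), encode(text_b))

total_loss = fusion_loss + debias_loss
total_loss.backward()
print(float(total_loss))
```

In this sketch both objectives share the same encoder, so the fusion and debiasing signals shape a single multilingual representation; the relative weighting of the two losses is left at 1:1 purely for illustration.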