no code implementations • ACL 2022 • Chun Sik Chan, Huanqi Kong, Guanqing Liang
Interpretation methods to reveal the internal reasoning processes behind machine learning models have attracted increasing attention in recent years.
no code implementations • ACL 2021 • Guanqing Liang, Cane Wing-Ki Leung
Specifically, we analyzed five benchmark datasets for Chinese NER and observed the following two types of data bias that can compromise model generalization ability.