no code implementations • 29 Sep 2021 • Rina Panigrahy, Brendan Juba, Zihao Deng, Xin Wang, Zee Fryer
We propose a modular architecture for lifelong learning of hierarchically structured tasks.
no code implementations • NAACL (WOAH) 2022 • Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster
A common approach for testing fairness issues in text-based classifiers is through the use of counterfactuals: does the classifier output change if a sensitive attribute in the input is changed?
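The counterfactual test described in the abstract can be sketched in a few lines. The classifier, template, and attribute terms below are purely hypothetical stand-ins, not the paper's actual setup: the idea is just to fill a template with each sensitive-attribute term and check whether the prediction flips.

```python
def toy_classifier(text: str) -> int:
    """Hypothetical stand-in classifier: flags text containing 'hate'."""
    return int("hate" in text.lower())

def counterfactual_pairs(template: str, attributes: list[str]) -> list[str]:
    """Fill a template with each sensitive-attribute term."""
    return [template.format(attr) for attr in attributes]

def prediction_flips(template: str, attributes: list[str], classifier) -> bool:
    """True if the classifier's output changes across the counterfactuals."""
    preds = {classifier(t) for t in counterfactual_pairs(template, attributes)}
    return len(preds) > 1

# A fair classifier should give the same output for all counterfactuals:
print(prediction_flips("I am a {} person.", ["tall", "short"], toy_classifier))
```

If `prediction_flips` returns `True`, the classifier's decision depends on the substituted attribute term, which is the fairness signal this style of testing looks for.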
no code implementations • 24 Jan 2024 • Miao Zhang, Zee Fryer, Ben Colman, Ali Shahriyari, Gaurav Bharaj
Machine learning model bias can arise from dataset composition: sensitive features correlated with the learning target disturb the model's decision rule and lead to performance differences along those features.
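The mechanism in the abstract can be illustrated with a small simulation; everything below (the synthetic data, base rates, and threshold classifier) is an assumption for illustration, not the paper's method. A sensitive attribute is correlated with the label by construction, so even a classifier that never sees the attribute shows a performance gap between groups:

```python
import numpy as np

def group_accuracies(seed: int = 0, n: int = 20000):
    """Simulate data where a sensitive attribute correlates with the label,
    then measure a fixed-threshold classifier's accuracy per group.
    All numbers here are illustrative choices, not values from the paper."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, n)                         # sensitive attribute (0/1)
    y = (rng.random(n) < 0.3 + 0.4 * s).astype(int)   # label correlated with s
    x = y + rng.normal(0.0, 1.2, n)                   # legitimate noisy feature
    pred = (x > 0.8).astype(int)                      # classifier ignores s
    return [(pred[s == g] == y[s == g]).mean() for g in (0, 1)]

acc0, acc1 = group_accuracies()
print(f"group 0 accuracy: {acc0:.3f}, group 1 accuracy: {acc1:.3f}")
```

Because the two groups have different base rates and the classifier's errors are asymmetric around its threshold, the accuracies differ by group even though the sensitive attribute is never used as an input.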