1 code implementation • 13 Oct 2022 • Ozgur Guldogan, Yuchen Zeng, Jy-yong Sohn, Ramtin Pedarsani, Kangwook Lee
In order to promote long-term fairness, we propose a new fairness notion called Equal Improvability (EI), which equalizes the potential acceptance rate of rejected samples across different groups, assuming each rejected sample will spend a bounded level of effort.
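One plausible way to formalize EI, consistent with the summary above (the score function f, acceptance threshold \tau, effort budget \delta, and group attribute z are our notation, not necessarily the paper's):

\Pr\Big( \max_{\|\Delta\| \le \delta} f(x + \Delta) \ge \tau \;\Big|\; f(x) < \tau,\; z = a \Big) = \Pr\Big( \max_{\|\Delta\| \le \delta} f(x + \Delta) \ge \tau \;\Big|\; f(x) < \tau,\; z = b \Big) \quad \text{for all groups } a, b.

That is, among currently rejected samples, the probability of becoming acceptable within an effort budget of \delta is equalized across groups.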
1 code implementation • 13 Oct 2022 • Yuchen Zeng, Kristjan Greenewald, Kangwook Lee, Justin Solomon, Mikhail Yurochkin
Traditional machine learning models focus on achieving good performance on the overall training distribution, but they often underperform on minority groups.
1 code implementation • 14 Jun 2022 • Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee
LIFT makes no changes to the model architecture or loss function; it relies solely on the natural language interface, enabling "no-code machine learning with LMs."
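As a rough illustration of that natural-language interface, here is a minimal sketch of how a tabular row could be serialized into text for an LM; the feature names, prompt wording, and helper functions are our own, not the paper's exact templates:

def row_to_prompt(features):
    # Serialize one tabular row into a natural-language question for the LM.
    described = ", ".join(f"{name} is {value}" for name, value in features.items())
    return f"Given that {described}, what is the label?"

def to_finetuning_example(features, label):
    # One (prompt, completion) pair; a pretrained LM is fine-tuned on such
    # strings with its ordinary language-modeling objective, so the model
    # architecture and loss are left untouched.
    return {"prompt": row_to_prompt(features), "completion": f" {label}"}

print(to_finetuning_example({"age": 37, "income": 54000}, "approved"))
# {'prompt': 'Given that age is 37, income is 54000, what is the label?', 'completion': ' approved'}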
2 code implementations • 29 Oct 2021 • Yuchen Zeng, Hongxu Chen, Kangwook Lee
We then show, theoretically and empirically, that the fairness-performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.
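To make "FedAvg-based fair learning" concrete, here is a generic sketch (logistic model, demographic-parity-style penalty, every client assumed to observe both groups); it illustrates the kind of setup being compared, not the specific algorithms analyzed in the paper:

import numpy as np

def local_fair_update(w, X, y, z, lam=1.0, lr=0.1, steps=20):
    # Local training on one client: logistic loss plus a penalty on the gap
    # between the two groups' mean predicted acceptance rates.
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)
        gap = p[z == 1].mean() - p[z == 0].mean()
        s = p * (1.0 - p)
        g1 = (s[z == 1, None] * X[z == 1]).mean(axis=0)
        g0 = (s[z == 0, None] * X[z == 0]).mean(axis=0)
        w = w - lr * (grad + lam * np.sign(gap) * (g1 - g0))
    return w

def fedavg_round(w, clients, **kwargs):
    # One FedAvg round: every client trains locally on its own data with its
    # own fairness penalty, and the server averages the resulting models,
    # weighted by each client's sample count.
    models, sizes = [], []
    for X, y, z in clients:
        models.append(local_fair_update(w.copy(), X, y, z, **kwargs))
        sizes.append(len(y))
    return np.average(models, axis=0, weights=sizes)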
no code implementations • NeurIPS 2019 • Miaoyan Wang, Yuchen Zeng
We consider the problem of identifying multiway block structure from a large noisy tensor.
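A toy generative picture of that problem (the dimensions, cluster counts, and noise level below are arbitrary illustrative choices, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)
dims, n_clusters = (30, 40, 50), (3, 4, 2)

# Each mode's indices are partitioned into clusters, and every block has a constant mean.
labels = [rng.integers(k, size=d) for d, k in zip(dims, n_clusters)]
core = rng.normal(size=n_clusters)

# Noisy observation: block mean plus i.i.d. Gaussian noise.
signal = core[np.ix_(labels[0], labels[1], labels[2])]
tensor = signal + 0.5 * rng.normal(size=dims)

# Recovering the per-mode cluster labels from `tensor` alone is the
# multiway block-structure identification problem.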