1 code implementation • 23 Jan 2024 • Ki Hyun Tae, Hantian Zhang, Jaeyoung Park, Kexin Rong, Steven Euijong Whang
Given a user-specified group fairness measure, Falcon identifies samples from "target groups" (e.g., (attribute=female, label=positive)) that are the most informative for improving fairness.
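A minimal sketch of this target-group idea, assuming an uncertainty-based notion of "informative": restrict the unlabeled pool to the group named by the fairness measure and label its most uncertain points first. The `select_for_fairness` helper, the entropy scoring, and the synthetic demo are illustrative assumptions, not Falcon's actual selection policy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy(probs):
    """Predictive entropy of each row of class probabilities (higher = more uncertain)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_for_fairness(model, X_pool, attrs, target_attr, budget):
    """Restrict the unlabeled pool to the target sensitive group and return the
    indices of its `budget` most uncertain samples for labeling."""
    in_group = np.where(attrs == target_attr)[0]
    scores = entropy(model.predict_proba(X_pool[in_group]))
    return in_group[np.argsort(-scores)[:budget]]

# Tiny synthetic demo (hypothetical data): warm-start a classifier, then pick
# 20 points from the group attrs == 1 to send to an annotator.
rng = np.random.default_rng(0)
X_seed = rng.normal(size=(100, 2)); y_seed = (X_seed[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(500, 2)); attrs = rng.integers(0, 2, size=500)
model = LogisticRegression().fit(X_seed, y_seed)
print(select_for_fairness(model, X_pool, attrs, target_attr=1, budget=20))
```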
1 code implementation • 15 Sep 2022 • Hantian Zhang, Ki Hyun Tae, Jaeyoung Park, Xu Chu, Steven Euijong Whang
We then propose an approximate linear programming algorithm and provide theoretical guarantees on how close its result is to the optimal solution in terms of the number of label flips.
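The excerpt does not give the formulation, so the following is only a rough sketch of an LP-relaxation-and-rounding approach to label flipping, under the assumption that fairness violations are measured as label disagreements over pre-computed similar pairs. The `flip_labels_lp` helper, the use of cvxpy, and the `similar_pairs`/`max_violation` inputs are illustrative assumptions, not the paper's algorithm or its theoretical guarantees.

```python
import numpy as np
import cvxpy as cp

def flip_labels_lp(y, similar_pairs, max_violation):
    """Relax binary labels to [0, 1], minimize total change from the original
    labels subject to a cap on disagreement over similar pairs, then round."""
    y = np.asarray(y, dtype=float)
    i_idx = [i for i, _ in similar_pairs]
    j_idx = [j for _, j in similar_pairs]
    z = cp.Variable(len(y))
    flips = cp.sum(cp.abs(z - y))                    # relaxed "number of flips"
    violation = cp.sum(cp.abs(z[i_idx] - z[j_idx]))  # relaxed pairwise disagreement
    cp.Problem(cp.Minimize(flips),
               [z >= 0, z <= 1, violation <= max_violation]).solve()
    return (z.value > 0.5).astype(int)               # round back to {0, 1}

# Hypothetical demo: two similar pairs with conflicting labels, no violation allowed.
print(flip_labels_lp([0, 1, 1, 0], similar_pairs=[(0, 1), (2, 3)], max_violation=0.0))
```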
no code implementations • 15 Jan 2021 • Steven Euijong Whang, Ki Hyun Tae, Yuji Roh, Geon Heo
Second, responsible AI must be broadly supported, preferably in all steps of machine learning.
2 code implementations • 10 Mar 2020 • Ki Hyun Tae, Steven Euijong Whang
Instead, we contend that one needs to selectively acquire data and propose Slice Tuner, which acquires possibly-different amounts of data per slice such that the model accuracy and fairness on all slices are optimized.
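As a hedged illustration of acquiring different amounts of data per slice, the sketch below assumes a power-law learning curve (loss ≈ a·n^(-b)) has already been fitted for each slice and greedily gives the next batch of examples to the slice with the largest estimated loss reduction. The `allocate_budget` helper, the curve form, and the greedy rule are assumptions for illustration, not Slice Tuner's actual optimizer.

```python
def allocate_budget(slice_sizes, curve_params, budget, step=10):
    """Greedily split `budget` new examples across slices, assuming each slice's
    loss follows a fitted power law a * n**(-b) in its training-set size n."""
    original = dict(slice_sizes)
    sizes = dict(slice_sizes)

    def est_loss(s, n):
        a, b = curve_params[s]
        return a * n ** (-b)

    for _ in range(budget // step):
        # Give the next `step` examples to the slice whose estimated loss drops most.
        gains = {s: est_loss(s, n) - est_loss(s, n + step) for s, n in sizes.items()}
        best = max(gains, key=gains.get)
        sizes[best] += step
    return {s: sizes[s] - original[s] for s in sizes}

# Hypothetical demo: the under-represented slice receives most of the 100 new examples.
print(allocate_budget({"female": 200, "male": 1000},
                      {"female": (5.0, 0.4), "male": (5.0, 0.4)},
                      budget=100))
```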
no code implementations • 22 Apr 2019 • Ki Hyun Tae, Yuji Roh, Young Hun Oh, Hyunsu Kim, Steven Euijong Whang
As machine learning is used in sensitive applications, it becomes imperative that the trained model is accurate, fair, and robust to attacks.
no code implementations • 16 Jul 2018 • Yeounoh Chung, Tim Kraska, Neoklis Polyzotis, Ki Hyun Tae, Steven Euijong Whang
As machine learning systems become democratized, it becomes increasingly important to help users easily debug their models.