no code implementations • 1 May 2023 • Yatong Chen, Wei Tang, Chien-Ju Ho, Yang Liu
Specifically, we develop a {\em reparameterization} framework that reparameterizes the performative prediction objective as a function of the induced data distribution.
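The core idea can be illustrated with a toy sketch (this construction is illustrative only, not the paper's actual setup): in a 1-D performative setting where deploying parameter `theta` shifts the induced data mean, the same objective can be written either as a function of `theta` or, after reparameterization, as a function of the induced distribution parameter `mu`. All names and the linear-shift model below are assumptions for illustration.

```python
import numpy as np

# Toy performative setting (assumed for illustration): deploying theta
# induces a data distribution whose mean is mu(theta) = mu0 + eps * theta.
mu0, eps = 1.0, 0.5

def induced_mean(theta):
    """Mean of the data distribution induced by deploying theta."""
    return mu0 + eps * theta

def loss_in_theta(theta):
    """Performative loss written as a function of the model parameter."""
    return (theta - induced_mean(theta)) ** 2

def loss_in_mu(mu):
    """Same objective, reparameterized as a function of the induced
    distribution parameter: invert mu(theta) to recover theta."""
    theta = (mu - mu0) / eps
    return (theta - mu) ** 2

# The two parameterizations agree pointwise along the induced map.
theta = 0.7
assert np.isclose(loss_in_theta(theta), loss_in_mu(induced_mean(theta)))
```

The point of the change of variables is that optimizing over the induced distribution can be better behaved than optimizing over `theta` directly, since the distribution shift is absorbed into the parameterization.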
1 code implementation • 21 Jan 2023 • Zeyu Tang, Yatong Chen, Yang Liu, Kun Zhang
The pursuit of long-term fairness involves the interplay between decision-making and the underlying data generating process.
no code implementations • 15 Jun 2022 • Jimmy Wu, Yatong Chen, Yang Liu
We study the problem of classifier derandomization in machine learning: given a stochastic binary classifier $f: X \to [0, 1]$, sample a deterministic classifier $\hat{f}: X \to \{0, 1\}$ that approximates the output of $f$ in aggregate over any data distribution.
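The simplest baseline for this problem can be sketched as independent randomized rounding: flip one coin per input with bias $f(x)$, yielding a deterministic lookup-table classifier whose expectation matches $f$ pointwise (the function names and the toy classifier below are hypothetical, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_f(x):
    """Toy stochastic binary classifier: returns P(label = 1 | x)."""
    return 1.0 / (1.0 + np.exp(-x))

def derandomize_independent(f, xs, rng):
    """Sample one deterministic classifier f_hat: X -> {0, 1} by rounding
    each f(x) independently. Over the sampling randomness,
    E[f_hat(x)] = f(x), so f_hat matches f in aggregate."""
    probs = np.array([f(x) for x in xs])
    labels = (rng.random(len(xs)) < probs).astype(int)
    return dict(zip(xs, labels))  # deterministic lookup table on xs

xs = np.linspace(-3, 3, 1000)
f_hat = derandomize_independent(stochastic_f, xs, rng)
```

Independent rounding needs fresh randomness per input; the interesting question the abstract points at is approximating `f` in aggregate with far less randomness, over any data distribution.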
1 code implementation • 31 May 2022 • Yatong Chen, Reilly Raab, Jialu Wang, Yang Liu
Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?
1 code implementation • 13 Jul 2021 • Yatong Chen, Zeyu Tang, Kun Zhang, Yang Liu
We provide both upper bounds on the performance gap due to the induced domain shift and lower bounds on the trade-offs a classifier must suffer on either the source training distribution or the induced target distribution.
no code implementations • 31 Oct 2020 • Yatong Chen, Jialu Wang, Yang Liu
Machine learning systems are often used in settings where individuals adapt their features to obtain a desired outcome.