no code implementations • 7 Jun 2024 • Samuel Deng, Daniel Hsu, Jingwen Liu
We study the problem of online multi-group learning, a learning model in which an online learner must simultaneously achieve small prediction regret on a large collection of (possibly overlapping) subsequences corresponding to a family of groups.
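As a rough sketch of the objective (the notation here is ours, not necessarily the paper's): for each group $g$ in a family $\mathcal{G}$, regret is measured only on the subsequence of rounds whose contexts fall in $g$,

\[
\mathrm{Reg}_T(g) \;=\; \sum_{t=1}^{T} \mathbf{1}\{x_t \in g\}\,\ell(\hat{y}_t, y_t) \;-\; \min_{h \in \mathcal{H}} \sum_{t=1}^{T} \mathbf{1}\{x_t \in g\}\,\ell(h(x_t), y_t),
\]

and the learner must keep $\mathrm{Reg}_T(g)$ sublinear in $T$ simultaneously for every $g \in \mathcal{G}$, even though the subsequences overlap.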
no code implementations • 1 Feb 2024 • Samuel Deng, Daniel Hsu
The multi-group learning model formalizes the learning scenario in which a single predictor must generalize well on multiple, possibly overlapping subgroups of interest.
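One way to state the guarantee (again in our own notation): a single predictor $h$ should be near-optimal conditionally on each group, i.e., for every $g \in \mathcal{G}$ with non-negligible probability mass,

\[
\Pr_{(x,y)\sim D}\!\big[h(x) \neq y \mid x \in g\big] \;\le\; \min_{h' \in \mathcal{H}} \Pr_{(x,y)\sim D}\!\big[h'(x) \neq y \mid x \in g\big] \;+\; \epsilon .
\]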
no code implementations • 7 Mar 2023 • Samuel Deng, Navid Ardeshir, Daniel Hsu
We consider the problem of distribution-free conformal prediction and the criterion of group conditional validity.
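Group conditional validity asks that coverage hold conditionally on group membership: $\Pr[Y \in \hat{C}(X) \mid X \in g] \ge 1 - \alpha$ for every group $g$ in a given family. For disjoint groups, a minimal sketch is split conformal prediction with one calibration quantile per group (the function and interface below are our own illustration, not the paper's method):

```python
import numpy as np

def groupwise_thresholds(scores_cal, group_masks, alpha=0.1):
    """Per-group split-conformal thresholds.

    scores_cal: nonconformity scores on a held-out calibration set, shape (n,).
    group_masks: list of boolean arrays of shape (n,), one per group.
    Returns one threshold per group; the prediction set for a test point in
    group g keeps every candidate label whose score is <= thresholds[g].
    """
    thresholds = []
    for mask in group_masks:
        s = np.sort(scores_cal[mask])
        n = len(s)
        # standard split-conformal quantile: the ceil((n+1)(1-alpha))-th
        # smallest calibration score (clipped to the sample size)
        k = min(int(np.ceil((n + 1) * (1 - alpha))), n) - 1
        thresholds.append(s[k])
    return thresholds
```

When groups overlap, a test point belongs to several groups at once and no single per-group threshold is simultaneously valid for all of them; that overlapping regime is where the problem becomes nontrivial.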
no code implementations • 18 Jan 2022 • Samuel Deng, Yilin Guo, Daniel Hsu, Debmalya Mandal
Prior works on learning linear representations for meta-learning assume a common representation shared across all tasks, and do not consider additional, task-specific observable side information.
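A minimal sketch of the two modeling assumptions being contrasted (notation ours): the classical assumption posits task parameters $\theta_t = B w_t$ for a representation $B \in \mathbb{R}^{d \times k}$ shared by all tasks; with observable side information $s_t$ for task $t$, one can instead let the representation vary with that side information,

\[
\text{shared: } \theta_t = B\,w_t \qquad \text{vs.} \qquad \text{side-information-dependent: } \theta_t = A(s_t)\,w_t,
\]

where $A(\cdot)$ maps a task's side information to its linear representation and only the low-dimensional $w_t$ remains task-specific.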
no code implementations • 4 Dec 2020 • Bo Cowgill, Fabrizio Dell'Acqua, Samuel Deng, Daniel Hsu, Nakul Verma, Augustin Chaintreau
We find that biased predictions are mostly caused by biased training data.
2 code implementations • 10 Nov 2020 • Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, Florian Tramer
A private machine learning algorithm hides as much as possible about its training data while still preserving accuracy.
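For context, one standard way to formalize "hiding the training data" is differential privacy (stated here purely as background; the paper's own privacy notion may differ): a randomized algorithm $\mathcal{A}$ is $(\varepsilon, \delta)$-differentially private if for all datasets $D, D'$ differing in one record and all events $S$,

\[
\Pr[\mathcal{A}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{A}(D') \in S] \;+\; \delta .
\]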
2 code implementations • NeurIPS 2020 • Debmalya Mandal, Samuel Deng, Suman Jana, Jeannette M. Wing, Daniel Hsu
In this work, we develop classifiers that are fair not only with respect to the training distribution, but also for a class of distributions that are weighted perturbations of the training samples.
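One natural way to formalize this (our notation): given training samples $(x_i, y_i)_{i=1}^n$ and a set $\mathcal{W}$ of admissible weight vectors over those samples, require the fairness constraint to hold under every reweighted empirical distribution,

\[
\min_{h \in \mathcal{H}} \ \widehat{\mathrm{err}}(h)
\quad \text{subject to} \quad
\max_{w \in \mathcal{W}} \ \mathrm{unfairness}\Big(h;\ \sum_{i=1}^n w_i\,\delta_{(x_i, y_i)}\Big) \;\le\; \tau,
\]

so that a classifier certified on the training distribution does not lose its fairness guarantee under the perturbations in $\mathcal{W}$.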
no code implementations • NeurIPS 2021 • Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta
Some of the stronger poisoning attacks require full knowledge of the training data.
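To illustrate the distinction (a toy sketch in our own notation, not an attack from the paper): a full-knowledge attacker can inspect the whole training set to pick the most damaging points to corrupt, while a data-oblivious attacker must commit to its poison without seeing the data:

```python
import numpy as np

def full_knowledge_flips(X, y, budget):
    """Toy 'full-knowledge' poisoning: flip the labels of the points nearest
    a least-squares decision boundary. Choosing WHICH points to flip requires
    seeing the entire training set (X, y); labels are in {-1, +1}."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear scores on clean data
    idx = np.argsort(np.abs(X @ w))[:budget]   # lowest-margin points
    y_poisoned = y.copy()
    y_poisoned[idx] *= -1
    return X, y_poisoned

def data_oblivious_injection(X, y, budget, seed=0):
    """Toy 'data-oblivious' poisoning: inject fixed mislabeled points chosen
    without ever looking at (X, y)."""
    rng = np.random.default_rng(seed)
    X_bad = rng.normal(size=(budget, X.shape[1]))
    return np.vstack([X, X_bad]), np.concatenate([y, -np.ones(budget)])
```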
no code implementations • 31 Oct 2019 • Samuel Deng, Achille Varzi
In the ML fairness literature, there have been few investigations through the lens of philosophy, a perspective that encourages the critical evaluation of basic assumptions.