2 code implementations • 18 May 2021 • Kyra Yee, Uthaipon Tantipongpipat, Shubhanshu Mishra
However, we demonstrate that formalized fairness metrics and quantitative analysis on their own are insufficient for capturing the risk of representational harm in automatic cropping.
1 code implementation • NeurIPS 2018 • Samira Samadi, Uthaipon Tantipongpipat, Jamie Morgenstern, Mohit Singh, Santosh Vempala
This motivates our study of dimensionality reduction techniques which maintain similar fidelity for A and B.
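As a minimal illustration of the problem, the sketch below (toy data, group names, and dimensions are assumptions, not from the paper) measures per-group reconstruction error of a single pooled-PCA projection, showing how one projection can serve two groups with very different fidelity:

```python
import numpy as np

def group_reconstruction_error(X, V):
    """Average squared reconstruction error of rows of X projected onto
    the subspace spanned by the orthonormal columns of V."""
    X_hat = X @ V @ V.T
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))

rng = np.random.default_rng(0)
# Two hypothetical groups whose variance lies in different directions.
A = rng.normal(size=(200, 5)) * np.array([3.0, 1.0, 1.0, 0.5, 0.5])
B = rng.normal(size=(200, 5)) * np.array([0.5, 0.5, 1.0, 1.0, 3.0])

# Standard PCA on the pooled data: top-2 right singular vectors.
X = np.vstack([A, B])
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
V = Vt[:2].T

# Fidelity can differ sharply between the groups, which motivates a fair
# variant that controls the worse of the two errors.
err_A = group_reconstruction_error(A - A.mean(axis=0), V)
err_B = group_reconstruction_error(B - B.mean(axis=0), V)
print(err_A, err_B)
```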
1 code implementation • 6 Dec 2019 • Uthaipon Tantipongpipat, Chris Waites, Digvijay Boob, Amaresh Ankit Siva, Rachel Cummings
We introduce the DP-auto-GAN framework for synthetic data generation, which combines the low dimensional representation of autoencoders with the flexibility of Generative Adversarial Networks (GANs).
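A rough numpy sketch of the pipeline shape only: a linear autoencoder (PCA) and a fitted Gaussian stand in for the learned, differentially private autoencoder and GAN of the framework; all data, dimensions, and the stand-in models are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # toy data

# "Autoencoder": a linear one (PCA) for illustration; the framework uses
# neural autoencoders trained with differential privacy.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
encode = lambda x: (x - mu) @ Vt[:3].T  # 10-D data -> 3-D latent code
decode = lambda z: z @ Vt[:3] + mu      # 3-D latent code -> 10-D data

# "Generator": the GAN learns the latent distribution; here a Gaussian
# fit to the encoded data is a crude stand-in.
Z = encode(X)
z_samples = rng.multivariate_normal(Z.mean(axis=0), np.cov(Z.T), size=5)
synthetic = decode(z_samples)  # synthetic records in data space
print(synthetic.shape)
```

The design point the sketch preserves: generation happens in the low-dimensional latent space, and the decoder maps samples back to data space.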
2 code implementations • NeurIPS 2019 • Uthaipon Tantipongpipat, Samira Samadi, Mohit Singh, Jamie Morgenstern, Santosh Vempala
Our main result is an exact polynomial-time algorithm for the two-criterion dimensionality reduction problem when the two criteria are increasing concave functions.
no code implementations • 16 Apr 2020 • Vivek Madan, Aleksandar Nikolov, Mohit Singh, Uthaipon Tantipongpipat
Our main result is a new approximation algorithm with an approximation guarantee that depends only on the dimension $d$ of the vectors and not on the size $k$ of the output set.
no code implementations • 19 Jun 2020 • Uthaipon Tantipongpipat
In this work, we study the $\lambda$-regularized $A$-optimal design problem and introduce the $\lambda$-regularized proportional volume sampling algorithm, generalized from [Nikolov, Singh, and Tantipongpipat, 2019], with an approximation guarantee that extends the previous work.
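A hedged sketch of the objective being optimized: `reg_A_value` below computes the $\lambda$-regularized $A$-optimality criterion $\operatorname{tr}\big((V_S^\top V_S + \lambda I)^{-1}\big)$ for a subset $S$ of design vectors, and a brute-force search finds the best size-$k$ subset on toy data. Proportional volume sampling is the paper's way of achieving a guarantee without such exhaustive search; the helper name and data here are illustrative:

```python
import numpy as np
from itertools import combinations

def reg_A_value(V, S, lam):
    """λ-regularized A-optimality value of subset S: smaller is better.
    V has one candidate design vector per row."""
    d = V.shape[1]
    M = V[list(S)].T @ V[list(S)] + lam * np.eye(d)
    return np.trace(np.linalg.inv(M))

rng = np.random.default_rng(1)
V = rng.normal(size=(8, 3))  # 8 candidate vectors in R^3 (illustrative)
k, lam = 4, 0.1

# Exhaustive search over all C(8, 4) = 70 subsets; feasible only at toy scale.
best = min(combinations(range(8), k), key=lambda S: reg_A_value(V, S, lam))
print(best, reg_A_value(V, best, lam))
```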
no code implementations • 1 Jan 2021 • Zhiqi Bu, Sivakanth Gopi, Janardhan Kulkarni, Yin Tat Lee, Uthaipon Tantipongpipat
Differentially Private SGD (DP-SGD) of Abadi et al. (2016) and its variations are the only known algorithms for private training of large-scale neural networks.
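The core DP-SGD update of Abadi et al. (2016) clips each per-example gradient to a fixed L2 norm and adds Gaussian noise to the summed gradient before averaging. A minimal numpy sketch with toy gradients and illustrative hyperparameters (the function name and values are assumptions for this example):

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    <= clip_norm, sum, add Gaussian noise with std noise_mult * clip_norm,
    then take an averaged gradient step."""
    n = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / n

rng = np.random.default_rng(2)
w = np.zeros(4)
grads = [rng.normal(size=4) * 10 for _ in range(32)]  # toy gradients
w_new = dp_sgd_step(w, grads, clip_norm=1.0, noise_mult=1.1, lr=0.1, rng=rng)
print(w_new)
```

Per-example clipping is the expensive part in practice, since it requires a gradient per sample rather than one averaged batch gradient, which is what the speedup work in these papers targets.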
no code implementations • NeurIPS 2021 • Zhiqi Bu, Sivakanth Gopi, Janardhan Kulkarni, Yin Tat Lee, Judy Hanwen Shen, Uthaipon Tantipongpipat
Unlike previous attempts to make DP-SGD faster, which work only on a subset of network architectures or rely on compiler techniques, we propose an algorithmic solution that works for any network in a black-box manner; this is the main contribution of this paper.
no code implementations • 3 Feb 2022 • Tomo Lazovich, Luca Belli, Aaron Gonzales, Amanda Bower, Uthaipon Tantipongpipat, Kristian Lum, Ferenc Huszar, Rumman Chowdhury
We show that we can use these metrics to identify content suggestion algorithms that contribute more strongly to skewed outcomes between users.
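This snippet does not name the specific metrics used. As one standard inequality measure of the kind described, the sketch below computes a Gini coefficient over hypothetical per-user outcome counts under two hypothetical algorithms; both the choice of Gini and the data are assumptions for illustration:

```python
import numpy as np

def gini(x):
    """Gini coefficient of nonnegative outcomes x
    (0 = perfectly equal, approaching 1 = highly concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Toy per-user impression counts under two hypothetical ranking algorithms.
equalish = [100, 110, 95, 105, 90]
skewed = [5, 5, 10, 30, 450]
print(gini(equalish), gini(skewed))  # the skewed allocation scores higher
```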