no code implementations • 30 Jun 2023 • Takuma Yoneda, Jiading Fang, Peng Li, Huanyu Zhang, Tianchong Jiang, Shengjie Lin, Ben Picker, David Yunis, Hongyuan Mei, Matthew R. Walter
In this paper, we explore a new dimension in which large language models may benefit robotics planning.
no code implementations • 9 Jun 2023 • Hua Wang, Sheng Gao, Huanyu Zhang, Weijie J. Su, Milan Shen
In our paper, we introduce DP-HyPO, a pioneering framework for "adaptive" private hyperparameter optimization, aiming to bridge the gap between private and non-private hyperparameter optimization.
no code implementations • 8 Jun 2023 • Ruiquan Huang, Huanyu Zhang, Luca Melis, Milan Shen, Meisam Hajzinia, Jing Yang
This paper studies federated linear contextual bandits under the notion of user-level differential privacy (DP).
no code implementations • 24 Oct 2022 • Shahab Asoodeh, Huanyu Zhang
We investigate the contraction properties of locally differentially private mechanisms.
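As a toy illustration of the contraction phenomenon this entry studies (a sketch, not code from the paper): binary randomized response, the canonical $\varepsilon$-LDP mechanism, shrinks the total-variation distance between any two input distributions by exactly $(e^\varepsilon-1)/(e^\varepsilon+1)$.

```python
import math

def rr_output_dist(p, eps):
    """Binary randomized response: keep the true bit w.p. e^eps/(1+e^eps).
    `p` is P(input = 1); returns P(output = 1) after privatization."""
    keep = math.exp(eps) / (1 + math.exp(eps))
    return p * keep + (1 - p) * (1 - keep)

eps = 1.0
p, q = 0.9, 0.1                      # two input Bernoulli distributions
tv_in = abs(p - q)                   # total variation before privatization
tv_out = abs(rr_output_dist(p, eps) - rr_output_dist(q, eps))

# Contraction coefficient of eps-LDP randomized response:
eta = (math.exp(eps) - 1) / (math.exp(eps) + 1)
assert abs(tv_out - eta * tv_in) < 1e-12
```

The contraction factor $\eta < 1$ is what makes distinguishing inputs harder after privatization, which is the kind of property the paper analyzes for general locally private mechanisms.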
1 code implementation • 9 Jun 2022 • Hua Wang, Sheng Gao, Huanyu Zhang, Milan Shen, Weijie J. Su
Many modern machine learning algorithms are composed of simple private algorithms; thus, an increasingly important problem is to efficiently compute the overall privacy loss under composition.
no code implementations • 9 Nov 2021 • Jayadev Acharya, Ayush Jain, Gautam Kamath, Ananda Theertha Suresh, Huanyu Zhang
We study the problem of robustly estimating the parameter $p$ of an Erdős–Rényi random graph on $n$ nodes, where a $\gamma$ fraction of nodes may be adversarially corrupted.
no code implementations • 2 Jun 2021 • Gautam Kamath, Xingtu Liu, Huanyu Zhang
Finally, we prove nearly-matching lower bounds for private stochastic convex optimization with strongly convex losses and mean estimation, showing new separations between pure and concentrated DP.
no code implementations • 21 Apr 2021 • Jayadev Acharya, Ziteng Sun, Huanyu Zhang
We consider both the "centralized setting" and the "distributed setting with information constraints," including communication and local differential privacy (LDP) constraints.
no code implementations • 1 Mar 2021 • Huanyu Zhang, Ilya Mironov, Meisam Hejazinia
Despite intense interest and considerable effort, the current generation of neural networks suffers a significant loss of accuracy under most practically relevant privacy training regimes.
no code implementations • 10 Feb 2021 • Huanyu Zhang, Ziping Zhao
The joint design of transmit waveforms and receive filters is desirable in many application scenarios of multiple-input multiple-output (MIMO) radar systems.
no code implementations • 14 Apr 2020 • Jayadev Acharya, Ziteng Sun, Huanyu Zhang
The technical component of our paper relates coupling between distributions to the sample complexity of estimation under differential privacy.
no code implementations • 21 Feb 2020 • Sivakanth Gopi, Gautam Kamath, Janardhan Kulkarni, Aleksandar Nikolov, Zhiwei Steven Wu, Huanyu Zhang
Absent privacy constraints, this problem requires $O(\log k)$ samples from $p$, and it was recently shown that the same complexity is achievable under (central) differential privacy.
no code implementations • ICML 2020 • Huanyu Zhang, Gautam Kamath, Janardhan Kulkarni, Zhiwei Steven Wu
We consider the problem of learning Markov Random Fields (including the prototypical example, the Ising model) under the constraint of differential privacy.
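For reference (standard background, not a result of the paper), the pairwise Ising model mentioned above is the distribution over $x \in \{\pm 1\}^n$ with

$$p(x) \propto \exp\Big(\sum_{i<j} A_{ij}\, x_i x_j + \sum_i \theta_i x_i\Big),$$

and learning it means recovering the interaction matrix $A$ (and external field $\theta$) from samples, here under a differential privacy constraint.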
no code implementations • 1 Oct 2019 • Di Wang, Lijie Hu, Huanyu Zhang, Marco Gaboardi, Jinhui Xu
In the second part of the paper, we extend our idea to the problem of estimating non-linear regressions and show similar results as in GLMs for both multivariate Gaussian and sub-Gaussian cases.
1 code implementation • ICML 2018 • Jayadev Acharya, Gautam Kamath, Ziteng Sun, Huanyu Zhang
We develop differentially private methods for estimating various distributional properties.
3 code implementations • 13 Feb 2018 • Jayadev Acharya, Ziteng Sun, Huanyu Zhang
All previously known sample optimal algorithms require linear (in $k$) communication from each user in the high privacy regime $(\varepsilon=O(1))$, and run in time that grows as $n\cdot k$, which can be prohibitive for large domain size $k$.
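For context on the communication cost this entry refers to, a minimal sketch (assuming a RAPPOR-style unary-encoding mechanism, one of the earlier sample-optimal approaches, not the paper's algorithm): the per-user report is a length-$k$ bit vector, i.e. communication linear in the domain size $k$.

```python
import math
import random

def unary_ldp(x, k, eps):
    """Unary-encoding LDP report: one-hot encode the symbol x in
    {0, ..., k-1}, then flip each bit independently with probability
    1/(1 + e^(eps/2)). The message is k bits long -- communication
    linear in k, which is prohibitive for large domains."""
    flip = 1.0 / (1.0 + math.exp(eps / 2))
    onehot = [1 if i == x else 0 for i in range(k)]
    return [b ^ (random.random() < flip) for b in onehot]

random.seed(0)
report = unary_ldp(3, 8, 1.0)   # one user's report: 8 bits for k = 8
```

Each user thus sends $k$ bits rather than the $O(\log k)$ achievable with more communication-efficient schemes such as the one in this entry.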
no code implementations • NeurIPS 2018 • Jayadev Acharya, Ziteng Sun, Huanyu Zhang
We propose a general framework to establish lower bounds on the sample complexity of statistical tasks under differential privacy.