no code implementations • ICML 2020 • Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly
In this paper, we study the problem of constrained min-max optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.
1 code implementation • 20 Sep 2023 • Xiangyi Chen, Stéphane Lathuilière
We propose FADING, a novel approach to address Face Aging via DIffusion-based editiNG.
2 code implementations • 13 Jun 2022 • Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, Sijia Liu
Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines.
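As a rough illustration of the large-batch adversarial training loop summarized above (this is not the authors' DAT implementation; the logistic model, the PGD attack, and the simulated workers are placeholder assumptions), each worker perturbs its local batch with a few PGD steps and the gradients on the perturbed batches are averaged before the update:

```python
# Hedged sketch of one distributed adversarial training step (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grads(w, x, y):
    """Logistic loss; returns (loss, grad wrt w, grad wrt x)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    dz = (p - y) / len(y)
    return loss, x.T @ dz, np.outer(dz, w)

def pgd_attack(w, x, y, eps=0.1, alpha=0.02, steps=5):
    """Inner maximization: L-inf PGD on the inputs."""
    x_adv = x.copy()
    for _ in range(steps):
        _, _, gx = loss_and_grads(w, x_adv, y)
        x_adv = np.clip(x_adv + alpha * np.sign(gx), x - eps, x + eps)
    return x_adv

d, n_workers, lr = 10, 4, 0.5
w = np.zeros(d)
# synthetic local data shards, one per simulated worker
shards = [(rng.normal(size=(32, d)), rng.integers(0, 2, 32)) for _ in range(n_workers)]

for step in range(100):
    grads = []
    for x, y in shards:                      # in DAT this loop runs on separate machines
        x_adv = pgd_attack(w, x, y)          # inner max: craft an adversarial batch
        _, gw, _ = loss_and_grads(w, x_adv, y)
        grads.append(gw)
    w -= lr * np.mean(grads, axis=0)         # outer min: averaged gradient step
```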
no code implementations • 30 Oct 2021 • Jian Du, Song Li, Xiangyi Chen, Siheng Chen, Mingyi Hong
Enforcing an equal privacy cost at every step, by keeping the gradient clipping threshold and noise power fixed, leads to unstable updates and lower model accuracy than the non-DP counterpart.
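For context, a minimal sketch of the static baseline described here, i.e. a DP-SGD step with a fixed clipping threshold C and noise power sigma (the values, model, and helper names are illustrative assumptions, not the paper's dynamic schedule):

```python
# Hedged sketch of one differentially private SGD step with a constant clipping
# threshold C and constant noise power sigma (the static baseline, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_example_grads, lr=0.1, C=1.0, sigma=1.0):
    # 1. Clip each per-example gradient to norm at most C.
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12)) for g in per_example_grads]
    # 2. Sum and add Gaussian noise calibrated to the clipping threshold.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * C, size=w.shape)
    # 3. Average over the batch and take a gradient step.
    return w - lr * noisy_sum / len(per_example_grads)

w = np.zeros(5)
batch_grads = [rng.normal(size=5) for _ in range(64)]
w = dp_sgd_step(w, batch_grads)
```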
no code implementations • 10 Sep 2021 • Xiangyi Chen, Xiaoyun Li, Ping Li
While adaptive gradient methods have proven effective for training neural networks, they have received little study in federated learning.
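A generic sketch of one way adaptivity can enter federated training, with the server applying an Adam-style update to the averaged client deltas; this setup is an illustrative assumption, not necessarily the algorithm analyzed in the paper:

```python
# Hedged sketch of server-side adaptive aggregation in federated learning:
# clients run local SGD, the server treats the averaged model delta as a
# pseudo-gradient and applies an Adam-style update to it.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 10, 8
w = np.zeros(d)
m, v = np.zeros(d), np.zeros(d)
beta1, beta2, eta, eps = 0.9, 0.99, 0.1, 1e-8

def local_sgd(w, steps=5, lr=0.05):
    """Placeholder local training on a synthetic quadratic objective."""
    target = rng.normal(size=w.shape)        # stands in for a client's local optimum
    w_local = w.copy()
    for _ in range(steps):
        w_local -= lr * (w_local - target)   # gradient of 0.5 * ||w - target||^2
    return w_local

for rnd in range(50):
    deltas = [local_sgd(w) - w for _ in range(n_clients)]
    g = -np.mean(deltas, axis=0)             # averaged delta as a pseudo-gradient
    m = beta1 * m + (1 - beta1) * g          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * g**2       # second moment (adaptivity)
    w -= eta * m / (np.sqrt(v) + eps)        # Adam-style server update
```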
no code implementations • 7 Sep 2021 • Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li
Adaptive gradient methods, including Adam, AdaGrad, and their variants, have been very successful for training deep learning models such as neural networks.
no code implementations • 25 Jun 2021 • Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, JinFeng Yi
Recently, there has been a line of work on incorporating the formal privacy notion of differential privacy with FL.
no code implementations • 1 Jan 2021 • Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li
Specifically, we propose a general algorithmic framework that can convert existing adaptive gradient methods to their decentralized counterparts.
no code implementations • NeurIPS 2020 • Xiangyi Chen, Zhiwei Steven Wu, Mingyi Hong
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.
no code implementations • 24 Jun 2020 • Yingxue Zhou, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Arindam Banerjee
We obtain this rate by providing the first analyses on a collection of private gradient-based methods, including adaptive algorithms DP RMSProp and DP Adam.
1 code implementation • NeurIPS 2019 • Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, David Cox
In this paper, we propose a zeroth-order AdaMM (ZO-AdaMM) algorithm that generalizes AdaMM to the gradient-free regime.
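A minimal sketch of the zeroth-order idea behind ZO-AdaMM, assuming a random-direction finite-difference estimate in place of the true gradient inside an AMSGrad-style update (the objective, smoothing parameter mu, and step sizes are illustrative, not the paper's setup):

```python
# Hedged sketch: a function-value-based gradient estimate feeds an AMSGrad-style update.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sum((x - 1.0) ** 2)            # black-box objective (values only)

def zo_gradient(x, mu=1e-3, n_dirs=10):
    """Average of random-direction finite-difference estimates."""
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / n_dirs

d = 20
x = np.zeros(d)
m, v, v_hat = np.zeros(d), np.zeros(d), np.zeros(d)
beta1, beta2, eta, eps = 0.9, 0.99, 0.05, 1e-8

for t in range(500):
    g = zo_gradient(x)                       # gradient-free estimate
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    v_hat = np.maximum(v_hat, v)             # AMSGrad-style max keeps steps non-increasing
    x -= eta * m / (np.sqrt(v_hat) + eps)
```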
1 code implementation • 30 Sep 2019 • Sijia Liu, Songtao Lu, Xiangyi Chen, Yao Feng, Kaidi Xu, Abdullah Al-Dujaili, Mingyi Hong, Una-May O'Reilly
In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values.
no code implementations • NeurIPS 2020 • Xiangyi Chen, Tiancong Chen, Haoran Sun, Zhiwei Steven Wu, Mingyi Hong
We show that these algorithms are non-convergent whenever there is some disparity between the expected median and mean over the local gradients.
no code implementations • ICLR 2019 • Sijia Liu, Pin-Yu Chen, Xiangyi Chen, Mingyi Hong
Our study shows that ZO signSGD requires $\sqrt{d}$ times more iterations than signSGD, leading to a convergence rate of $O(\sqrt{d}/\sqrt{T})$ under mild conditions, where $d$ is the number of optimization variables, and $T$ is the number of iterations.
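A minimal sketch of the ZO signSGD update, assuming a simple quadratic stand-in for the black-box objective; only the sign of a function-value-based gradient estimate drives each step (all parameters here are illustrative):

```python
# Hedged sketch of ZO signSGD: sign steps on a zeroth-order gradient estimate.
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.sum(x ** 2)                    # black-box objective (values only)

d, mu, lr, q = 50, 1e-3, 0.02, 10
x = rng.normal(size=d)

for t in range(500):
    g_est = np.zeros(d)
    for _ in range(q):                       # average q random-direction estimates
        u = rng.normal(size=d)
        g_est += (f(x + mu * u) - f(x)) / mu * u
    x -= lr * np.sign(g_est / q)             # signSGD step on the ZO estimate
```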
no code implementations • ICLR 2019 • Songtao Lu, Rahul Singh, Xiangyi Chen, Yongxin Chen, Mingyi Hong
By developing new primal-dual optimization tools, we show that, with a proper stepsize choice, the widely used first-order iterative algorithm in training GANs would in fact converge to a stationary solution with a sublinear rate.
no code implementations • ICLR 2019 • Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong
We prove that under our derived conditions, these methods can achieve the convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization.