no code implementations • 6 Sep 2022 • Huaming Ling, Chenglong Bao, Xin Liang, Zuoqiang Shi
However, existing methods adopt a static affinity matrix to learn the low-dimensional representations of data points and do not optimize the affinity matrix during the learning process.
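A minimal sketch of the contrast the abstract draws: instead of fixing the affinity matrix up front, it can be rebuilt from the current embeddings at every step and fed into a graph-smoothness term. This is illustrative only, assuming a toy autoencoder and a Gaussian-kernel affinity; it is not the paper's actual formulation.

```python
# Sketch: affinity matrix recomputed from current embeddings (dynamic),
# rather than precomputed once from raw data (static). Illustrative only.
import torch
import torch.nn as nn

def gaussian_affinity(z, sigma=1.0):
    # Pairwise Gaussian-kernel affinities W_ij = exp(-||z_i - z_j||^2 / (2 sigma^2)).
    d2 = torch.cdist(z, z).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def laplacian_smoothness(z, w):
    # Graph smoothness penalty: trace(Z^T L Z) with L = D - W.
    lap = torch.diag(w.sum(dim=1)) - w
    return torch.trace(z.t() @ lap @ z)

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(128, 64)  # toy data
for step in range(200):
    z = encoder(x)
    w = gaussian_affinity(z.detach())  # affinity rebuilt from the current embeddings
    loss = nn.functional.mse_loss(decoder(z), x) + 1e-3 * laplacian_smoothness(z, w)
    opt.zero_grad(); loss.backward(); opt.step()
```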
no code implementations • 18 Oct 2021 • Tao Sun, Huaming Ling, Zuoqiang Shi, Dongsheng Li, Bao Wang
In this paper, to eliminate the effort of tuning the momentum-related hyperparameter, we propose a new adaptive momentum inspired by the optimal choice of the heavy ball momentum for quadratic optimization.
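For a strongly convex quadratic with curvature between mu and L, the classical (Polyak) heavy-ball parameters that the abstract alludes to are alpha = 4/(sqrt(L)+sqrt(mu))^2 and beta = ((sqrt(L)-sqrt(mu))/(sqrt(L)+sqrt(mu)))^2. The sketch below only illustrates that quadratic intuition on a toy problem; the paper's adaptive rule for general deep-learning objectives is not reproduced here.

```python
# Sketch: Polyak's heavy-ball method with the classical optimal momentum for a
# quadratic objective f(x) = 0.5 x^T A x - b^T x. Illustrative only.
import numpy as np

def heavy_ball_quadratic(A, b, x0, iters=500):
    eigvals = np.linalg.eigvalsh(A)
    mu, L = eigvals[0], eigvals[-1]                              # smallest/largest curvature
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2                # optimal step size
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2  # optimal momentum
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        grad = A @ x - b
        # Heavy-ball update: x_{k+1} = x_k - alpha * grad + beta * (x_k - x_{k-1})
        x, x_prev = x - alpha * grad + beta * (x - x_prev), x
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 0.1 * np.eye(20)       # symmetric positive definite quadratic
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)
print(np.linalg.norm(heavy_ball_quadratic(A, b, np.zeros(20)) - x_star))
```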
no code implementations • 1 Jan 2021 • Wenqi Tao, Huaming Ling, Zuoqiang Shi, Bao Wang
Empirically, we show that residual perturbation outperforms the state-of-the-art DP stochastic gradient descent (DPSGD) in both membership privacy protection and maintaining the DL models' utility.
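One plausible reading of "residual perturbation", sketched below under the assumption that it means injecting Gaussian noise into the residual branch at training time (in contrast to DPSGD, which clips and noises per-sample gradients). The paper's exact noise placement and calibration may differ; the block and noise level here are hypothetical.

```python
# Hedged sketch: noise added to the residual mapping of each block during training.
# Assumption-based illustration, not the paper's verified mechanism.
import torch
import torch.nn as nn

class NoisyResidualBlock(nn.Module):
    def __init__(self, dim, noise_std=0.1):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.noise_std = noise_std

    def forward(self, x):
        r = self.body(x)
        if self.training:
            # Perturb the residual output rather than the weights or gradients.
            r = r + self.noise_std * torch.randn_like(r)
        return x + r

net = nn.Sequential(NoisyResidualBlock(32), NoisyResidualBlock(32), nn.Linear(32, 10))
logits = net(torch.randn(8, 32))  # noisy forward pass while net.training is True
```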