no code implementations • 24 Mar 2023 • Vahid Partovi Nia, Guojun Zhang, Ivan Kobyzev, Michael R. Metel, Xinlin Li, Ke Sun, Sobhan Hemati, Masoud Asgharian, Linglong Kong, Wulong Liu, Boxing Chen
Deep models have dominated the artificial intelligence (AI) industry since the ImageNet challenge in 2012.
no code implementations • 6 Feb 2023 • Alex Bie, Gautam Kamath, Guojun Zhang
We show that the canonical approach for training differentially private GANs -- updating the discriminator with differentially private stochastic gradient descent (DPSGD) -- can yield significantly improved results after modifications to training.
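As a rough illustration of the DPSGD step the abstract refers to, here is a minimal PyTorch sketch of one differentially private update for a discriminator: each example's gradient is clipped, the clipped gradients are summed and noised, and the averaged result is applied. The function name and hyperparameters are illustrative, not the paper's actual training code.

```python
import torch

def dpsgd_step(model, loss_fn, xs, ys, lr=1e-3,
               clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: clip each example's gradient to clip_norm,
    sum, add Gaussian noise, then apply the averaged update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):  # size-1 microbatches give exact per-sample grads
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(scale * g)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * (noise_multiplier * clip_norm)
            p.add_(-(lr / len(xs)) * (s + noise))
```

The privacy accounting itself (choosing noise_multiplier for a target (ε, δ)) is a separate step, typically handled by a moments-accountant-style analysis.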
no code implementations • 27 Nov 2022 • Ehsan Imani, Guojun Zhang, Jun Luo, Pascal Poupart, Yangchen Pan
Recent work reported the label alignment property in a supervised learning setting: the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix.
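The property is easy to probe numerically. Below is a minimal NumPy sketch, assuming an n × d data matrix X and a label vector y of length n; the value of k and the synthetic data are purely illustrative.

```python
import numpy as np

def label_alignment(X, y, k=10):
    """Fraction of the label vector's norm lying in the span of the
    top-k left singular vectors of the data matrix X (n x d)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    proj = U[:, :k] @ (U[:, :k].T @ y)  # project y onto span(U_k)
    return np.linalg.norm(proj) / np.linalg.norm(y)

# toy data with a fast-decaying spectrum: labels built from X
# concentrate in the top singular directions
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50)) * np.exp(-np.arange(50) / 5.0)
y = X @ rng.normal(size=50)
print(label_alignment(X, y, k=10))  # close to 1
```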
no code implementations • 5 Aug 2022 • Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, YaoLiang Yu
As with other differentially private (DP) learners, the major challenge for DPGMs is striking a delicate balance between utility and privacy.
1 code implementation • 20 Jun 2022 • Mohsin Hasan, Zehao Zhang, Kaiyang Guo, Mahdi Karami, Guojun Zhang, Xi Chen, Pascal Poupart
In contrast, our method performs the aggregation on the predictive posteriors, which are typically easier to approximate owing to the low dimensionality of the output space.
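The excerpt does not specify the aggregation rule itself; one simple way to fuse predictive posteriors, shown purely as an illustration (the paper's actual rule may differ), is a weighted mixture of the clients' class probabilities:

```python
import numpy as np

def aggregate_predictive(client_probs, weights=None):
    """Fuse clients' predictive posteriors for one input by a (weighted)
    mixture: client_probs has shape (n_clients, n_classes)."""
    client_probs = np.asarray(client_probs)
    if weights is None:
        weights = np.full(client_probs.shape[0], 1.0 / client_probs.shape[0])
    fused = weights @ client_probs  # mixture of predictive distributions
    return fused / fused.sum()      # renormalize for safety

# three clients' class posteriors over 4 classes
p = [[0.7, 0.1, 0.1, 0.1],
     [0.6, 0.2, 0.1, 0.1],
     [0.2, 0.5, 0.2, 0.1]]
print(aggregate_predictive(p))
```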
no code implementations • 20 Jun 2022 • Artur Back de Luca, Guojun Zhang, Xi Chen, YaoLiang Yu
Federated Learning (FL) is a prominent framework that enables training a centralized model by fusing local, decentralized models, while preserving user privacy.
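The fusion step is typically FedAvg-style weighted parameter averaging; a minimal sketch, assuming each client's model is represented as a list of NumPy parameter arrays:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style fusion: average each parameter across clients,
    weighted by local dataset size."""
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    return [
        sum(c * w[i] for c, w in zip(coeffs, client_weights))
        for i in range(len(client_weights[0]))
    ]

# two clients, each holding two parameter arrays
w_a = [np.ones((2, 2)), np.zeros(3)]
w_b = [np.zeros((2, 2)), np.ones(3)]
print(fedavg([w_a, w_b], client_sizes=[100, 300]))
```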
no code implementations • 13 Jun 2022 • Haolin Yu, Kaiyang Guo, Mahdi Karami, Xi Chen, Guojun Zhang, Pascal Poupart
We present Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a scalable stand-alone global federated GP that respects clients' privacy.
no code implementations • ICLR 2022 • David Acuna, Marc T Law, Guojun Zhang, Sanja Fidler
Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, often hindering transfer performance.
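Domain-adversarial training is commonly implemented with a gradient reversal layer, which makes the underlying min-max game (and hence the optimizer issue above) explicit. A minimal PyTorch sketch of the standard construction, not the paper's code:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda
    on the backward pass, so the feature extractor ascends the domain
    loss while the domain classifier descends it."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# features flow unchanged forward; gradients flip sign flowing back
feats = torch.randn(8, 16, requires_grad=True)
grad_reverse(feats, lam=0.5).sum().backward()
print(feats.grad[0, :4])  # every entry equals -0.5
```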
no code implementations • 3 Feb 2022 • Guojun Zhang, Saber Malekmohammadi, Xi Chen, YaoLiang Yu
With the increasingly broad deployment of federated learning (FL) systems in the real world, it is critical but challenging to ensure fairness in FL, i.e., reasonably satisfactory performance for each of the numerous diverse clients.
no code implementations • 29 Sep 2021 • Guojun Zhang, Yiwei Lu, Sun Sun, Hongyu Guo, YaoLiang Yu
Self-supervised contrastive learning is an emerging field owing to its ability to learn good data representations.
1 code implementation • 21 Jun 2021 • David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain and to a related labeled dataset.
2 code implementations • NeurIPS 2021 • Guojun Zhang, Han Zhao, YaoLiang Yu, Pascal Poupart
We then prove that our transferability measure can be estimated from sufficiently many samples, and give a new upper bound on the target error in terms of it.
no code implementations • 1 Jan 2021 • David Acuna, Guojun Zhang, Marc T Law, Sanja Fidler
We provide empirical results for several f-divergences and show that some, not considered previously in domain-adversarial learning, achieve state-of-the-art results in practice.
1 code implementation • 25 Jun 2020 • Guojun Zhang, Kaiwen Wu, Pascal Poupart, Yao-Liang Yu
We prove their local convergence at strict local minimax points, which are surrogates of global solutions.
1 code implementation • 20 Jun 2020 • Zeou Hu, Kiarash Shaloudegi, Guojun Zhang, Yao-Liang Yu
Federated learning has emerged as a promising, massively distributed way to train a joint deep model across large numbers of edge devices while keeping private user data strictly on device.
no code implementations • 27 Feb 2020 • Guojun Zhang, Pascal Poupart, Yao-Liang Yu
Convergence to a saddle point for convex-concave functions has been studied for decades, while recent years have seen a surge of interest in non-convex (zero-sum) smooth games, motivated by their wide range of recent applications.
no code implementations • 10 Nov 2019 • Azkar Saeed Ahmad, Yongcheng Liang, Mingdong Dong, Xuefeng Zhou, Leiming Fang, Yuanhua Xia, Jianhong Dai, Xiaozhi Yan, Xiaohui Yu, Guojun Zhang, Yusheng Zhao, Shanmin Wang
Layered transition-metal compounds with controllable magnetic behaviors provide many fascinating opportunities for the fabrication of high-performance magneto-electric and spintronic devices.
1 code implementation • ICLR 2020 • Guojun Zhang, Yao-Liang Yu
Min-max formulations have attracted great attention in the ML community with the rise of deep generative models and adversarial methods, yet understanding the dynamics of gradient algorithms for solving them has remained a grand challenge.
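The difficulty is visible even on the bilinear toy problem min_x max_y xy: simultaneous gradient descent-ascent spirals away from the saddle point at the origin, while the extragradient method converges to it. A small self-contained demo (step size and iteration count are arbitrary):

```python
def gda(x, y, lr=0.1, steps=100):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y."""
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x     # grad_x f = y, grad_y f = x
    return x, y

def extragradient(x, y, lr=0.1, steps=100):
    """Extragradient: take a lookahead step, then update with the
    gradient evaluated at the lookahead point."""
    for _ in range(steps):
        xh, yh = x - lr * y, y + lr * x   # lookahead step
        x, y = x - lr * yh, y + lr * xh   # corrected step
    return x, y

print("GDA:          ", gda(1.0, 1.0))            # moves away from (0, 0)
print("extragradient:", extragradient(1.0, 1.0))  # shrinks toward (0, 0)
```

For GDA the distance to the origin grows by a factor of sqrt(1 + lr^2) per step, whereas the extragradient iterate contracts, which is exactly the kind of dynamical distinction the paper studies.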
1 code implementation • 8 Jul 2019 • Guojun Zhang, Pascal Poupart, George Trimponias
In the case of mixtures of Bernoullis, we find that there exist one-cluster regions that are stable for GD and therefore trap GD, but those regions are unstable for EM, allowing EM to escape.
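To make the GD-versus-EM comparison concrete, here is a compact EM sketch for a two-component mixture of Bernoullis; it is a generic textbook implementation, not the paper's experimental code.

```python
import numpy as np

def em_bernoulli_mixture(X, k=2, iters=50, seed=0):
    """EM for a mixture of Bernoullis: X is a binary (n, d) matrix."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                   # mixing weights
    mu = rng.uniform(0.25, 0.75, size=(k, d))  # per-cluster Bernoulli means
    for _ in range(iters):
        # E-step: responsibilities from per-cluster log-likelihoods
        logp = (X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T
                + np.log(pi))                  # shape (n, k)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means and mixing proportions
        nk = r.sum(axis=0)
        mu = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
        pi = nk / n
    return pi, mu

# two well-separated binary clusters
rng = np.random.default_rng(1)
A = (rng.random((100, 10)) < 0.9).astype(float)
B = (rng.random((100, 10)) < 0.1).astype(float)
pi, mu = em_bernoulli_mixture(np.vstack([A, B]))
print(pi.round(2), mu.round(2))  # roughly equal weights, means near 0.9 / 0.1
```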