no code implementations • 11 Jul 2024 • Haolin Yu, Guojun Zhang, Pascal Poupart
In federated learning (FL), the common paradigm, proposed by FedAvg and followed by most algorithms, is that clients train local models on their private data and share the model parameters for central aggregation, most often by averaging.
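For illustration, a minimal sketch of this parameter-averaging paradigm (the `local_train` routine and tensor layout are hypothetical placeholders, not the paper's implementation):

```python
import torch

def fedavg_round(global_params, client_datasets, local_train, weights=None):
    """One FedAvg round: broadcast, local training, weighted parameter averaging.

    `local_train` is a hypothetical client-side routine (e.g., a few epochs
    of SGD) that returns updated parameter tensors."""
    updates = []
    for data in client_datasets:
        local = [p.clone() for p in global_params]   # start from the global model
        updates.append(local_train(local, data))
    if weights is None:
        weights = [1.0 / len(updates)] * len(updates)
    # Central aggregation: weighted average of the clients' parameters.
    return [sum(w * u[i] for w, u in zip(weights, updates))
            for i in range(len(global_params))]
```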
no code implementations • 5 Jun 2024 • Yihan Wang, Yiwei Lu, Guojun Zhang, Franziska Boenisch, Adam Dziedzic, YaoLiang Yu, Xiao-Shan Gao
Machine unlearning provides viable solutions to revoke the effect of certain training data on pre-trained model parameters.
no code implementations • 23 Feb 2024 • Nader Asadi, Mahdi Beitollahi, Yasser Khalil, Yinchuan Li, Guojun Zhang, Xi Chen
Parameter-efficient fine-tuning is the standard approach for adapting large language and vision models to downstream tasks at low cost.
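As one widely used instance of this family, a LoRA-style adapter freezes the pre-trained weights and trains only a low-rank update; this is a generic sketch, not necessarily the method studied in the paper:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (LoRA-style PEFT)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pre-trained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Only A and B receive gradients; the base layer stays fixed.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```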
no code implementations • 15 Feb 2024 • Yiwei Lu, Guojun Zhang, Sun Sun, Hongyu Guo, YaoLiang Yu
In self-supervised contrastive learning, a widely adopted objective is InfoNCE, which compares representations with a heuristic cosine similarity and is closely related to maximizing Kullback-Leibler (KL)-based mutual information.
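A minimal sketch of the InfoNCE objective with cosine similarity, assuming paired batches of embeddings where matching rows are positives:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE with cosine similarity: row i of z1 and row i of z2 are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature         # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)  # -log softmax of the positive pair
```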
1 code implementation • 3 Feb 2024 • Yifei He, Shiji Zhou, Guojun Zhang, Hyokun Yun, Yi Xu, Belinda Zeng, Trishul Chilimbi, Han Zhao
To overcome this limitation, we propose Multi-Task Learning with Excess Risks (ExcessMTL), an excess-risk-based task balancing method that instead updates each task's weight according to its distance to convergence.
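A toy illustration of the weighting idea, assuming hypothetical per-task loss estimates; the paper's actual excess-risk estimator and update rule differ:

```python
import numpy as np

def excess_risk_weights(current_losses, attainable_losses, temperature=1.0):
    """Toy task weighting by excess risk: current loss minus an estimate of
    the minimal attainable loss per task. Tasks farther from convergence
    receive larger weights. Illustrative only."""
    excess = np.maximum(np.asarray(current_losses)
                        - np.asarray(attainable_losses), 0.0)
    w = np.exp(excess / temperature)
    return w / w.sum()
```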
no code implementations • 2 Feb 2024 • Yasser H. Khalil, Amir H. Estiri, Mahdi Beitollahi, Nader Asadi, Sobhan Hemati, Xu Li, Guojun Zhang, Xi Chen
In real-world deployments, the centralized server in Federated Learning (FL) presents challenges, including communication bottlenecks and susceptibility to a single point of failure.
no code implementations • 2 Feb 2024 • Mahdi Beitollahi, Alex Bie, Sobhan Hemati, Leo Maxime Brunswic, Xu Li, Xi Chen, Guojun Zhang
This paper introduces FedPFT (Federated Learning with Parametric Feature Transfer), a methodology that harnesses the transferability of foundation models to enhance both accuracy and communication efficiency in one-shot FL.
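A minimal sketch of the feature-transfer idea under an assumed class-conditional Gaussian parametric family (one plausible choice; the paper's exact parametric model may differ):

```python
import numpy as np

def client_feature_stats(features_by_class):
    """Client side: fit a simple parametric model (here, a Gaussian per class)
    to frozen foundation-model features, then share only its parameters."""
    return {c: (f.mean(axis=0), np.cov(f, rowvar=False))
            for c, f in features_by_class.items()}

def server_synthesize(stats, n_per_class=500, seed=0):
    """Server side: sample synthetic features from the transferred parameters
    and use them to train a classifier head in a single round."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for c, (mean, cov) in stats.items():
        X.append(rng.multivariate_normal(mean, cov, size=n_per_class))
        y += [c] * n_per_class
    return np.vstack(X), np.asarray(y)
```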
2 code implementations • 15 Dec 2023 • Mohsin Hasan, Guojun Zhang, Kaiyang Guo, Xi Chen, Pascal Poupart
To improve scalability for larger models, one common Bayesian approach is to approximate the global predictive posterior by multiplying local predictive posteriors.
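Concretely, over a discrete label space this amounts to p(y|x) being proportional to the product of the clients' p_k(y|x). A minimal numerical sketch (assuming a uniform prior; some variants also divide out extra copies of the prior, omitted here):

```python
import numpy as np

def aggregate_predictive_posteriors(local_probs):
    """Approximate the global predictive posterior as the normalized product
    of local predictive posteriors over a discrete label space.

    local_probs: array-like of shape (num_clients, num_classes)."""
    log_p = np.sum(np.log(np.asarray(local_probs) + 1e-12), axis=0)
    p = np.exp(log_p - log_p.max())   # subtract max for numerical stability
    return p / p.sum()
```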
no code implementations • 8 Dec 2023 • Sobhan Hemati, Mahdi Beitollahi, Amir Hossein Estiri, Bassel Al Omari, Xi Chen, Guojun Zhang
VRM reduces the estimation error of ERM by replacing the point-wise kernel estimates with a more precise estimate of the true data distribution, one that reduces the gap between data points within each domain.
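For reference, the standard contrast between ERM and VRM (Chapelle et al.), where ν is a vicinity distribution such as a Gaussian kernel around each training point:

```latex
% ERM minimizes risk under the empirical (point-mass) distribution; VRM
% replaces each point mass with a vicinity distribution \nu around it:
R_{\mathrm{ERM}}(f) = \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big),
\qquad
R_{\mathrm{VRM}}(f) = \frac{1}{n} \sum_{i=1}^{n}
  \mathbb{E}_{(\tilde{x}, \tilde{y}) \sim \nu(\cdot \mid x_i, y_i)}
  \Big[ \ell\big(f(\tilde{x}), \tilde{y}\big) \Big].
```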
1 code implementation • 7 Nov 2023 • Ahmad Rashid, Serena Hacker, Guojun Zhang, Agustinus Kristiadi, Pascal Poupart
For instance, ReLU networks (a popular class of neural network architectures) have been shown to almost always yield high-confidence predictions when the test data are far from the training set, even when they are trained with OOD data.
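This behavior is easy to reproduce: a ReLU network is piecewise affine, so its logits grow roughly linearly along rays from the origin and softmax confidence saturates far from the data. A minimal demonstration with a randomly initialized network:

```python
import torch
import torch.nn as nn

# Scaling an input along a ray scales the logits roughly linearly,
# so the softmax confidence typically tends toward 1 far from the data.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
x = torch.randn(1, 2)
for scale in (1, 10, 100, 1000):
    probs = torch.softmax(net(x * scale), dim=1)
    print(scale, probs.max().item())   # max confidence grows with scale
```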
1 code implementation • ICCV 2023 • Sobhan Hemati, Guojun Zhang, Amir Estiri, Xi Chen
We validate the OOD generalization ability of the proposed methods in different scenarios, including transferability, severe correlation shift, label shift, and diversity shift.
1 code implementation • 18 Aug 2023 • Guojun Zhang, Mahdi Beitollahi, Alex Bie, Xi Chen
In this work, we reveal the profound connection between layer normalization and the label shift problem in federated learning.
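For reference, layer normalization standardizes each sample across its feature dimension (the standard definition, not specific to this paper):

```latex
% Layer normalization, applied per sample over the feature dimension d:
\mathrm{LN}(x) = \gamma \odot \frac{x - \mu(x)}{\sigma(x)} + \beta,
\qquad
\mu(x) = \frac{1}{d} \sum_{j=1}^{d} x_j, \qquad
\sigma(x) = \sqrt{\frac{1}{d} \sum_{j=1}^{d} \big(x_j - \mu(x)\big)^2 }.
```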
no code implementations • 24 Mar 2023 • Vahid Partovi Nia, Guojun Zhang, Ivan Kobyzev, Michael R. Metel, Xinlin Li, Ke Sun, Sobhan Hemati, Masoud Asgharian, Linglong Kong, Wulong Liu, Boxing Chen
Deep models have dominated the artificial intelligence (AI) industry since the ImageNet challenge in 2012.
1 code implementation • 6 Feb 2023 • Alex Bie, Gautam Kamath, Guojun Zhang
We show that the canonical approach for training differentially private GANs -- updating the discriminator with differentially private stochastic gradient descent (DPSGD) -- can yield significantly improved results after modifications to training.
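A minimal sketch of the DP-SGD mechanism itself (per-sample clipping plus Gaussian noise); production training would use a library such as Opacus and track the privacy budget, which this omits:

```python
import torch

def dpsgd_step(params, per_sample_grads, clip_norm=1.0, noise_mult=1.0, lr=0.05):
    """One DP-SGD update: clip each per-sample gradient, sum, add noise, average.

    per_sample_grads: list over samples, each a list of tensors matching params."""
    summed = [torch.zeros_like(p) for p in params]
    for grads in per_sample_grads:
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        factor = (clip_norm / (norm + 1e-12)).clamp(max=1.0)  # per-sample clipping
        for s, g in zip(summed, grads):
            s += g * factor
    n = len(per_sample_grads)
    for p, s in zip(params, summed):
        noise = torch.randn_like(p) * noise_mult * clip_norm  # Gaussian mechanism
        p -= lr * (s + noise) / n
    return params
```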
1 code implementation • 27 Nov 2022 • Ehsan Imani, Guojun Zhang, Runjia Li, Jun Luo, Pascal Poupart, Philip H. S. Torr, Yangchen Pan
Recent work has highlighted the label alignment property (LAP) in supervised learning, where the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix.
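A quick way to measure this property, assuming a data matrix `X` with samples as rows and a label vector `y`:

```python
import numpy as np

def label_alignment(X, y, k=10):
    """Fraction of the label vector's energy lying in the span of the top-k
    left singular vectors of the data matrix X (rows are samples)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    coeffs = U.T @ y              # projections onto each singular direction
    return float(np.sum(coeffs[:k] ** 2) / (y @ y))
```

The label alignment property corresponds to this ratio being close to 1 even for small k.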
no code implementations • 5 Aug 2022 • Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, YaoLiang Yu
As with other differentially private (DP) learners, the major challenge for DPGM is how to strike a subtle balance between utility and privacy.
1 code implementation • 20 Jun 2022 • Mohsin Hasan, Zehao Zhang, Kaiyang Guo, Mahdi Karami, Guojun Zhang, Xi Chen, Pascal Poupart
In contrast, our method performs the aggregation on the predictive posteriors, which are typically easier to approximate owing to the low-dimensionality of the output space.
1 code implementation • 20 Jun 2022 • Artur Back de Luca, Guojun Zhang, Xi Chen, YaoLiang Yu
Federated Learning (FL) is a prominent framework that enables training a centralized model while preserving user privacy by fusing local, decentralized models.
no code implementations • 13 Jun 2022 • Haolin Yu, Kaiyang Guo, Mahdi Karami, Xi Chen, Guojun Zhang, Pascal Poupart
We present Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a scalable stand-alone global federated GP that respects clients' privacy.
no code implementations • ICLR 2022 • David Acuna, Marc T Law, Guojun Zhang, Sanja Fidler
Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance.
1 code implementation • 3 Feb 2022 • Guojun Zhang, Saber Malekmohammadi, Xi Chen, YaoLiang Yu
With the increasingly broad deployment of federated learning (FL) systems in the real world, it is critical but challenging to ensure fairness in FL, i.e., reasonably satisfactory performance for each of the numerous and diverse clients.
no code implementations • 29 Sep 2021 • Guojun Zhang, Yiwei Lu, Sun Sun, Hongyu Guo, YaoLiang Yu
Self-supervised contrastive learning is an emerging field owing to its power to provide good data representations.
1 code implementation • 21 Jun 2021 • David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain and a related labeled dataset from a source domain.
2 code implementations • NeurIPS 2021 • Guojun Zhang, Han Zhao, YaoLiang Yu, Pascal Poupart
We then prove that our transferability measure can be estimated with enough samples, and derive a new upper bound on the target error based on it.
no code implementations • 1 Jan 2021 • David Acuna, Guojun Zhang, Marc T Law, Sanja Fidler
We provide empirical results for several f-divergences and show that some, not considered previously in domain-adversarial learning, achieve state-of-the-art results in practice.
1 code implementation • 25 Jun 2020 • Guojun Zhang, Kaiwen Wu, Pascal Poupart, Yao-Liang Yu
We prove their local convergence at strict local minimax points, which are surrogates of global solutions.
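For reference, a common second-order characterization of a strict local minimax point, following Jin et al. (2020); the paper's precise definition may differ in details:

```latex
% Second-order sufficient conditions for a strict local minimax point
% (x*, y*) of f(x, y):
\nabla f(x^*, y^*) = 0, \qquad \nabla^2_{yy} f(x^*, y^*) \prec 0,
\qquad
\left[\nabla^2_{xx} f - \nabla^2_{xy} f \,\big(\nabla^2_{yy} f\big)^{-1} \nabla^2_{yx} f\right](x^*, y^*) \succ 0 .
```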
2 code implementations • 20 Jun 2020 • Zeou Hu, Kiarash Shaloudegi, Guojun Zhang, Yao-Liang Yu
Federated learning has emerged as a promising, massively distributed way to train a joint deep model across a large number of edge devices while keeping private user data strictly on-device.
no code implementations • 27 Feb 2020 • Guojun Zhang, Pascal Poupart, Yao-Liang Yu
Convergence to a saddle point for convex-concave functions has been studied for decades, while recent years have seen a surge of interest in non-convex (zero-sum) smooth games, motivated by their wide range of recent applications.
no code implementations • 10 Nov 2019 • Azkar Saeed Ahmad, Yongcheng Liang, Mingdong Dong, Xuefeng Zhou, Leiming Fang, Yuanhua Xia, Jianhong Dai, Xiaozhi Yan, Xiaohui Yu, Guojun Zhang, Yusheng Zhao, Shanmin Wang
Layered transition-metal compounds with controllable magnetic behaviors provide many fascinating opportunities for the fabrication of high-performance magneto-electric and spintronic devices.
1 code implementation • ICLR 2020 • Guojun Zhang, Yao-Liang Yu
Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, while understanding the dynamics of gradient algorithms for solving such formulations has remained a grand challenge.
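The canonical example of such dynamics is simultaneous gradient descent-ascent, shown here for reference rather than as the paper's specific algorithm:

```latex
% Simultaneous gradient descent-ascent on min_x max_y f(x, y), the basic
% dynamics whose convergence behavior such analyses study:
x_{t+1} = x_t - \eta \, \nabla_x f(x_t, y_t), \qquad
y_{t+1} = y_t + \eta \, \nabla_y f(x_t, y_t).
```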
1 code implementation • 8 Jul 2019 • Guojun Zhang, Pascal Poupart, George Trimponias
In the case of mixtures of Bernoullis, we find that there exist one-cluster regions that are stable for GD and therefore trap GD, but those regions are unstable for EM, allowing EM to escape.
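A minimal EM implementation for a mixture of Bernoullis, illustrating the closed-form M-step jump that lets EM escape regions where gradient descent stalls (a sketch, not the paper's experimental code):

```python
import numpy as np

def em_bernoulli_mixture(X, K=2, iters=100, seed=0):
    """EM for a mixture of Bernoullis; X is a binary matrix of shape (n, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                    # mixing weights
    mu = rng.uniform(0.25, 0.75, size=(K, d))   # Bernoulli means per component
    for _ in range(iters):
        # E-step: responsibilities from per-component log-likelihoods.
        log_p = (X @ np.log(mu.T) + (1 - X) @ np.log(1 - mu.T) + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form updates, a global jump rather than a local
        # gradient step.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, mu
```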