Search Results for author: Guojun Zhang

Found 29 papers, 14 papers with code

Comparing EM with GD in Mixture Models of Two Components

1 code implementation 8 Jul 2019 Guojun Zhang, Pascal Poupart, George Trimponias

In the case of mixtures of Bernoullis, we find that there exist one-cluster regions that are stable for GD and therefore trap GD, but those regions are unstable for EM, allowing EM to escape.

Vocal Bursts Valence Prediction
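The escape behavior described above contrasts the dynamics of EM and gradient descent on Bernoulli mixtures. As a minimal self-contained sketch (not the paper's construction: equal, known mixing weights and a toy multivariate Bernoulli mixture are assumed), the EM iteration for two components, with its monotone-likelihood property, looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 400
# synthetic data: two Bernoulli components with per-dimension means 0.2 and 0.8
x = np.concatenate([rng.binomial(1, 0.2, (n, d)),
                    rng.binomial(1, 0.8, (n, d))]).astype(float)

def comp_lik(m):
    # per-sample likelihood under a Bernoulli component with mean vector m
    return np.prod(m ** x * (1 - m) ** (1 - x), axis=1)

def log_lik(m1, m2):
    return np.log(0.5 * comp_lik(m1) + 0.5 * comp_lik(m2)).sum()

def em_step(m1, m2):
    l1, l2 = comp_lik(m1), comp_lik(m2)
    r = 0.5 * l1 / (0.5 * l1 + 0.5 * l2)      # E-step: responsibilities for component 1
    # M-step: responsibility-weighted means
    return (r[:, None] * x).sum(0) / r.sum(), ((1 - r)[:, None] * x).sum(0) / (1 - r).sum()

m1, m2 = np.full(d, 0.4), np.full(d, 0.6)
prev = log_lik(m1, m2)
for _ in range(50):
    m1, m2 = em_step(m1, m2)
    cur = log_lik(m1, m2)
    assert cur >= prev - 1e-9   # EM never decreases the likelihood
    prev = cur
```

Unlike a fixed-step gradient update on the same log-likelihood, each EM step here is a closed-form reweighted mean, which is what gives the two methods different stability behavior around one-cluster configurations.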

Convergence of Gradient Methods on Bilinear Zero-Sum Games

1 code implementation ICLR 2020 Guojun Zhang, Yao-Liang Yu

Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, while understanding the dynamics of gradient algorithms for solving such formulations has remained a grand challenge.
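A toy illustration of the dynamics the paper analyzes: on the bilinear game f(x, y) = xy, simultaneous gradient descent-ascent spirals away from the equilibrium while the extragradient method contracts toward it. The step size and iteration count below are illustrative:

```python
import numpy as np

eta = 0.2
z_gda = np.array([1.0, 1.0])   # (x, y) for gradient descent-ascent
z_eg  = np.array([1.0, 1.0])   # (x, y) for extragradient

def grad(z):
    x, y = z
    # f(x, y) = x * y:  df/dx = y,  df/dy = x
    return np.array([y, x])

for _ in range(200):
    # simultaneous gradient descent-ascent: descend in x, ascend in y
    g = grad(z_gda)
    z_gda = z_gda + eta * np.array([-g[0], g[1]])
    # extragradient: take a probe half-step, then update with the probe's gradient
    g = grad(z_eg)
    z_half = z_eg + eta * np.array([-g[0], g[1]])
    gh = grad(z_half)
    z_eg = z_eg + eta * np.array([-gh[0], gh[1]])

assert np.linalg.norm(z_gda) > 10     # GDA diverges on the bilinear game
assert np.linalg.norm(z_eg) < 0.1     # extragradient converges toward (0, 0)
```

Per step, GDA multiplies the distance to the origin by sqrt(1 + eta^2) > 1, while extragradient multiplies it by sqrt(1 - eta^2 + eta^4) < 1, which is the contrast the convergence analysis makes precise.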

Pressure-driven switching of magnetism in layered CrCl3

no code implementations 10 Nov 2019 Azkar Saeed Ahmad, Yongcheng Liang, Mingdong Dong, Xuefeng Zhou, Leiming Fang, Yuanhua Xia, Jianhong Dai, Xiaozhi Yan, Xiaohui Yu, Guojun Zhang, Yusheng Zhao, Shanmin Wang

Layered transition-metal compounds with controllable magnetic behaviors provide many fascinating opportunities for the fabrication of high-performance magneto-electric and spintronic devices.

Materials Science

Optimality and Stability in Non-Convex Smooth Games

no code implementations 27 Feb 2020 Guojun Zhang, Pascal Poupart, Yao-Liang Yu

Convergence to a saddle point for convex-concave functions has been studied for decades, while recent years have seen a surge of interest in non-convex (zero-sum) smooth games, motivated by their recent wide applications.

Federated Learning Meets Multi-objective Optimization

1 code implementation 20 Jun 2020 Zeou Hu, Kiarash Shaloudegi, Guojun Zhang, Yao-Liang Yu

Federated learning has emerged as a promising, massively distributed way to train a joint deep model across large numbers of edge devices while keeping private user data strictly on device.

Fairness Federated Learning

Newton-type Methods for Minimax Optimization

1 code implementation 25 Jun 2020 Guojun Zhang, Kaiwen Wu, Pascal Poupart, Yao-Liang Yu

We prove their local convergence at strict local minimax points, which are surrogates of global solutions.

Reinforcement Learning (RL) Vocal Bursts Type Prediction

f-Domain-Adversarial Learning: Theory and Algorithms for Unsupervised Domain Adaptation with Neural Networks

no code implementations 1 Jan 2021 David Acuna, Guojun Zhang, Marc T Law, Sanja Fidler

We provide empirical results for several f-divergences and show that some, not considered previously in domain-adversarial learning, achieve state-of-the-art results in practice.

Generalization Bounds Learning Theory +1

Quantifying and Improving Transferability in Domain Generalization

2 code implementations NeurIPS 2021 Guojun Zhang, Han Zhao, YaoLiang Yu, Pascal Poupart

We then prove that our transferability can be estimated with enough samples and give a new upper bound for the target error based on our transferability.

Domain Generalization Out-of-Distribution Generalization

f-Domain-Adversarial Learning: Theory and Algorithms

1 code implementation 21 Jun 2021 David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler

Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain and a related labeled dataset.

Learning Theory Unsupervised Domain Adaptation

$f$-Mutual Information Contrastive Learning

no code implementations 29 Sep 2021 Guojun Zhang, Yiwei Lu, Sun Sun, Hongyu Guo, YaoLiang Yu

Self-supervised contrastive learning is an emerging field due to its power in providing good data representations.

Contrastive Learning

Proportional Fairness in Federated Learning

1 code implementation 3 Feb 2022 Guojun Zhang, Saber Malekmohammadi, Xi Chen, YaoLiang Yu

With the increasingly broad deployment of federated learning (FL) systems in the real world, it is critical but challenging to ensure fairness in FL, i.e., reasonably satisfactory performance for each of the numerous diverse clients.

Fairness Federated Learning

Domain Adversarial Training: A Game Perspective

no code implementations ICLR 2022 David Acuna, Marc T Law, Guojun Zhang, Sanja Fidler

Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance.

Domain Adaptation

Federated Bayesian Neural Regression: A Scalable Global Federated Gaussian Process

no code implementations 13 Jun 2022 Haolin Yu, Kaiyang Guo, Mahdi Karami, Xi Chen, Guojun Zhang, Pascal Poupart

We present Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a scalable stand-alone global federated GP that respects clients' privacy.

Federated Learning Knowledge Distillation +1

Mitigating Data Heterogeneity in Federated Learning with Data Augmentation

1 code implementation 20 Jun 2022 Artur Back de Luca, Guojun Zhang, Xi Chen, YaoLiang Yu

Federated Learning (FL) is a prominent framework that enables training a centralized model while securing user privacy by fusing local, decentralized models.

Data Augmentation Domain Generalization +1

Robust One Round Federated Learning with Predictive Space Bayesian Inference

1 code implementation 20 Jun 2022 Mohsin Hasan, Zehao Zhang, Kaiyang Guo, Mahdi Karami, Guojun Zhang, Xi Chen, Pascal Poupart

In contrast, our method performs the aggregation on the predictive posteriors, which are typically easier to approximate owing to the low-dimensionality of the output space.

Bayesian Inference Federated Learning
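A minimal sketch of aggregating in the predictive space: per-client categorical predictive posteriors for a test point are multiplied in log space and renormalized. The helper name and the categorical setting are illustrative, not the paper's algorithm:

```python
import numpy as np

def aggregate_predictives(client_probs):
    """Aggregate per-client categorical predictive posteriors by
    normalized multiplication (a product-of-predictives sketch)."""
    log_p = np.sum(np.log(np.asarray(client_probs)), axis=0)  # sum of logs = log of product
    log_p -= log_p.max()                                      # numerical stability
    p = np.exp(log_p)
    return p / p.sum()

# two clients, 3-class predictive distributions for one test point
c1 = np.array([0.7, 0.2, 0.1])
c2 = np.array([0.6, 0.3, 0.1])
p = aggregate_predictives([c1, c2])
assert abs(p.sum() - 1.0) < 1e-12
assert p.argmax() == 0   # both clients favor class 0, so does the product
```

Because the output space is low-dimensional, this product over class probabilities is far cheaper to approximate than a product over weight-space posteriors.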

DP$^2$-VAE: Differentially Private Pre-trained Variational Autoencoders

no code implementations 5 Aug 2022 Dihong Jiang, Guojun Zhang, Mahdi Karami, Xi Chen, Yunfeng Shao, YaoLiang Yu

Similar to other differentially private (DP) learners, the major challenge for DP generative models (DPGMs) is also how to achieve a subtle balance between utility and privacy.

Label Alignment Regularization for Distribution Shift

no code implementations 27 Nov 2022 Ehsan Imani, Guojun Zhang, Runjia Li, Jun Luo, Pascal Poupart, Philip H. S. Torr, Yangchen Pan

Recent work has highlighted the label alignment property (LAP) in supervised learning, where the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix.

Representation Learning Sentiment Analysis +1
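The label alignment property can be checked numerically: project the label vector onto the top few left singular vectors of the data matrix and measure how much of its norm survives. The synthetic spiked data below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 200, 50, 5
# synthetic data with k dominant directions; labels depend only on those
scales = np.concatenate([np.full(k, 10.0), np.full(d - k, 0.5)])
X = rng.normal(size=(n, d)) * scales
w = np.zeros(d)
w[:k] = 1.0
y = X @ w

U, s, Vt = np.linalg.svd(X, full_matrices=False)
proj = U[:, :k] @ (U[:, :k].T @ y)        # project y onto top-k left singular vectors
alignment = np.linalg.norm(proj) / np.linalg.norm(y)
assert alignment > 0.9   # labels lie (mostly) in the top singular subspace
```

An `alignment` close to 1 is exactly the property the paper turns into a regularizer for distribution shift.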

Private GANs, Revisited

1 code implementation 6 Feb 2023 Alex Bie, Gautam Kamath, Guojun Zhang

We show that the canonical approach for training differentially private GANs -- updating the discriminator with differentially private stochastic gradient descent (DPSGD) -- can yield significantly improved results after modifications to training.

Image Generation
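The core DPSGD step applied to the discriminator, per-sample gradient clipping followed by Gaussian noise, can be sketched as follows. This is an illustrative numpy sketch, not the Opacus or TensorFlow Privacy API:

```python
import numpy as np

def dpsgd_aggregate(per_sample_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DPSGD aggregation step: clip each sample's gradient to clip_norm,
    sum, add Gaussian noise scaled by noise_mult * clip_norm, then average."""
    rng = rng or np.random.default_rng(0)
    g = np.asarray(per_sample_grads, dtype=float)      # shape (batch, dim)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)         # per-sample clipping
    noisy_sum = g.sum(0) + rng.normal(scale=noise_mult * clip_norm, size=g.shape[1])
    return noisy_sum / len(g)

grads = np.array([[3.0, 4.0], [0.3, 0.4]])   # per-sample norms 5.0 and 0.5
out = dpsgd_aggregate(grads, clip_norm=1.0, noise_mult=0.0)
# with zero noise: first gradient clipped to norm 1 -> [0.6, 0.8]; second kept as-is
assert np.allclose(out, (np.array([0.6, 0.8]) + np.array([0.3, 0.4])) / 2)
```

The training modifications the paper studies (e.g., how often and how the discriminator is updated) sit on top of this basic privatized gradient step.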

Mathematical Challenges in Deep Learning

no code implementations 24 Mar 2023 Vahid Partovi Nia, Guojun Zhang, Ivan Kobyzev, Michael R. Metel, Xinlin Li, Ke Sun, Sobhan Hemati, Masoud Asgharian, Linglong Kong, Wulong Liu, Boxing Chen

Deep models have dominated the artificial intelligence (AI) industry since the ImageNet challenge in 2012.

Understanding the Role of Layer Normalization in Label-Skewed Federated Learning

1 code implementation 18 Aug 2023 Guojun Zhang, Mahdi Beitollahi, Alex Bie, Xi Chen

In this work, we reveal the profound connection between layer normalization and the label shift problem in federated learning.

Federated Learning

Understanding Hessian Alignment for Domain Generalization

1 code implementation ICCV 2023 Sobhan Hemati, Guojun Zhang, Amir Estiri, Xi Chen

We validate the OOD generalization ability of proposed methods in different scenarios, including transferability, severe correlation shift, label shift and diversity shift.

Autonomous Vehicles Domain Generalization +1

Preventing Arbitrarily High Confidence on Far-Away Data in Point-Estimated Discriminative Neural Networks

1 code implementation 7 Nov 2023 Ahmad Rashid, Serena Hacker, Guojun Zhang, Agustinus Kristiadi, Pascal Poupart

For instance, ReLU networks - a popular class of neural network architectures - have been shown to almost always yield high confidence predictions when the test data are far away from the training set, even when they are trained with OOD data.

Cross Domain Generative Augmentation: Domain Generalization with Latent Diffusion Models

no code implementations 8 Dec 2023 Sobhan Hemati, Mahdi Beitollahi, Amir Hossein Estiri, Bassel Al Omari, Xi Chen, Guojun Zhang

The VRM reduces the estimation error in ERM by replacing the point-wise kernel estimates with a more precise estimate of the true data distribution that reduces the gap between data points within each domain.

Adversarial Robustness Data Augmentation +1
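One widely used instance of the vicinal idea is mixup, which replaces point-wise samples with convex combinations of random pairs; a minimal sketch (hyperparameters illustrative, and not necessarily the cross-domain augmentation the paper proposes):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """mixup: draw lambda ~ Beta(alpha, alpha) and form the same convex
    combination of both the inputs and the labels (a VRM instance)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.array([0.0, 0.0]), 0.0, np.array([1.0, 1.0]), 1.0)
assert 0.0 <= y_mix <= 1.0          # the vicinal label interpolates its parents
assert np.allclose(x_mix, y_mix)    # here x and y mix with the same lambda
```

Generative augmentation pushes the same principle further by sampling vicinal points from a learned model rather than from linear interpolation alone.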

Calibrated One Round Federated Learning with Bayesian Inference in the Predictive Space

2 code implementations 15 Dec 2023 Mohsin Hasan, Guojun Zhang, Kaiyang Guo, Xi Chen, Pascal Poupart

To improve scalability for larger models, one common Bayesian approach is to approximate the global predictive posterior by multiplying local predictive posteriors.

Bayesian Inference Federated Learning

DFML: Decentralized Federated Mutual Learning

no code implementations 2 Feb 2024 Yasser H. Khalil, Amir H. Estiri, Mahdi Beitollahi, Nader Asadi, Sobhan Hemati, Xu Li, Guojun Zhang, Xi Chen

In the realm of real-world devices, centralized servers in Federated Learning (FL) present challenges including communication bottlenecks and susceptibility to a single point of failure.

Federated Learning

Parametric Feature Transfer: One-shot Federated Learning with Foundation Models

no code implementations 2 Feb 2024 Mahdi Beitollahi, Alex Bie, Sobhan Hemati, Leo Maxime Brunswic, Xu Li, Xi Chen, Guojun Zhang

This paper introduces FedPFT (Federated Learning with Parametric Feature Transfer), a methodology that harnesses the transferability of foundation models to enhance both accuracy and communication efficiency in one-shot FL.

Federated Learning

Robust Multi-Task Learning with Excess Risks

no code implementations 3 Feb 2024 Yifei He, Shiji Zhou, Guojun Zhang, Hyokun Yun, Yi Xu, Belinda Zeng, Trishul Chilimbi, Han Zhao

To overcome this limitation, we propose Multi-Task Learning with Excess Risks (ExcessMTL), an excess risk-based task balancing method that updates the task weights by their distances to convergence instead.

Multi-Task Learning

$f$-MICL: Understanding and Generalizing InfoNCE-based Contrastive Learning

no code implementations 15 Feb 2024 Yiwei Lu, Guojun Zhang, Sun Sun, Hongyu Guo, YaoLiang Yu

In self-supervised contrastive learning, a widely-adopted objective function is InfoNCE, which uses the heuristic cosine similarity for the representation comparison, and is closely related to maximizing the Kullback-Leibler (KL)-based mutual information.

Contrastive Learning
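The InfoNCE objective with cosine similarity, the standard form that this line of work generalizes via f-divergences, can be written for a single anchor as follows (temperature and embeddings illustrative):

```python
import numpy as np

def info_nce(z_anchor, z_pos, z_negs, temperature=0.1):
    """InfoNCE loss for one anchor: cross-entropy of picking the positive
    among positive + negatives, scored by cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(z_anchor, z_pos)] +
                      [cos(z_anchor, z) for z in z_negs]) / temperature
    logits -= logits.max()                    # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negatives = [np.array([0.0, 1.0]), np.array([-1.0, 0.2])]
loss = info_nce(anchor, positive, negatives)
assert loss > 0
# a perfectly aligned positive (same negatives) yields a lower loss
assert info_nce(anchor, anchor, negatives) <= loss
```

Replacing the implicit KL divergence behind this softmax with other f-divergences is what yields the generalized f-MICL family.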

Does Combining Parameter-efficient Modules Improve Few-shot Transfer Accuracy?

no code implementations 23 Feb 2024 Nader Asadi, Mahdi Beitollahi, Yasser Khalil, Yinchuan Li, Guojun Zhang, Xi Chen

Parameter-efficient fine-tuning stands as the standard for efficiently fine-tuning large language and vision models on downstream tasks.
