Search Results for author: Yonggang Zhang

Found 24 papers, 18 papers with code

Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks

no code implementations • ICML 2020 • Yonggang Zhang, Ya Li, Tongliang Liu, Xinmei Tian

To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs perturbed along different search directions.
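For context, a minimal sketch of the generic query strategy this excerpt alludes to: estimating gradients from finite differences along random search directions, assuming only a black-box loss oracle `loss_fn`. This is standard zeroth-order estimation, not the paper's dual-path distillation itself.

```python
import numpy as np

# Generic zeroth-order gradient estimation via random search directions,
# using two queries per direction. NOT the paper's method.
def estimate_gradient(loss_fn, x, n_dirs=50, sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)                       # unit search direction
        delta = loss_fn(x + sigma * u) - loss_fn(x - sigma * u)
        grad += (delta / (2 * sigma)) * u            # finite-difference slope
    return grad / n_dirs
```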

NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation

1 code implementation • 13 Mar 2024 • Pengfei Zheng, Yonggang Zhang, Zhen Fang, Tongliang Liu, Defu Lian, Bo Han

Hence, NoiseDiffusion performs interpolation within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.

Denoising
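A hedged sketch of the interpolation scheme the excerpt above describes, under the assumption that a pretrained diffusion denoiser consumes the returned tensor; `slerp`, `alpha`, and `sigma` are illustrative names, not the paper's exact formulation.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two flattened noise tensors."""
    cos = np.dot(z0.ravel(), z1.ravel()) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos, -1.0, 1.0))
    s = max(np.sin(omega), 1e-8)                     # guard near-parallel inputs
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / s

def noisy_interpolate(x0, x1, t, alpha=0.1, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    z0 = x0 + sigma * rng.normal(size=x0.shape)      # move endpoints to noisy space
    z1 = x1 + sigma * rng.normal(size=x1.shape)
    z = slerp(z0, z1, t)                             # interpolate noisy images
    return z + alpha * ((1 - t) * x0 + t * x1)       # inject raw-image information
# The result would then be passed to a pretrained diffusion denoiser.
```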

ConjNorm: Tractable Density Estimation for Out-of-Distribution Detection

no code implementations • 27 Feb 2024 • Bo Peng, Yadan Luo, Yonggang Zhang, Yixuan Li, Zhen Fang

Extensive experiments across OOD detection benchmarks empirically demonstrate that our proposed ConjNorm has established a new state-of-the-art in a variety of OOD detection setups, outperforming the current best method by up to 13.25% and 28.19% (FPR95) on CIFAR-100 and ImageNet-1K, respectively.

Density Estimation • Out-of-Distribution Detection • +1
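As a rough illustration of density-based OOD scoring in the spirit of this title (the exact ConjNorm estimator is not reproduced here), one can score inputs by their p-norm distance to the nearest class mean in feature space:

```python
import numpy as np

# Higher scores look more in-distribution. Illustrates norm-based density
# scoring only, not the exact ConjNorm estimator.
def ood_score(feature, class_means, p=2.0):
    dists = [np.linalg.norm(feature - mu, ord=p) for mu in class_means]
    return -min(dists)

# FPR95, the metric quoted above, fixes a threshold that accepts 95% of
# in-distribution data and reports how many OOD samples still pass it.
```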

Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting

1 code implementation • 23 Feb 2024 • Rong Dai, Yonggang Zhang, Ang Li, Tongliang Liu, Xun Yang, Bo Han

These hard samples are then employed to promote the quality of the ensemble model by adjusting the ensembling weights for each client model.

Federated Learning
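A hypothetical sketch of the re-weighting step described above: client models that handle the mined hard samples better receive larger ensembling weights. Treating `models` as callables returning logits, and the softmax temperature, are assumptions.

```python
import numpy as np

def _softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Client models with lower loss on the hard samples get larger weights.
def update_ensemble_weights(models, hard_x, hard_y, temp=1.0):
    losses = []
    for m in models:
        probs = _softmax(m(hard_x))                          # (N, C)
        nll = -np.log(probs[np.arange(len(hard_y)), hard_y] + 1e-12).mean()
        losses.append(nll)
    w = np.exp(-np.array(losses) / temp)                     # low loss -> high weight
    return w / w.sum()
```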

Robust Training of Federated Models with Extremely Label Deficiency

2 code implementations • 22 Feb 2024 • Yonggang Zhang, Zhiqin Yang, Xinmei Tian, Nannan Wang, Tongliang Liu, Bo Han

Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.

FedImpro: Measuring and Improving Client Update in Federated Learning

no code implementations • 10 Feb 2024 • Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xinmei Tian, Tongliang Liu, Bo Han, Xiaowen Chu

First, we analyze the generalization contribution of local training and conclude that this generalization contribution is bounded by the conditional Wasserstein distance between the data distribution of different clients.

Federated Learning
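To make the bound's key quantity above concrete, here is an illustrative per-class (conditional) 1-D Wasserstein estimate between two clients' scalar feature projections; the paper's bound is stated over full conditional distributions, so treat this as a simplification.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Average the 1-D Wasserstein distance between two clients' feature
# distributions class by class. Features here are scalar projections
# (e.g., one coordinate per sample).
def conditional_w1(feats_a, labels_a, feats_b, labels_b):
    classes = np.intersect1d(np.unique(labels_a), np.unique(labels_b))
    dists = [wasserstein_distance(feats_a[labels_a == c], feats_b[labels_b == c])
             for c in classes]
    return float(np.mean(dists))
```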

Federated Learning with Extremely Noisy Clients via Negative Distillation

1 code implementation • 20 Dec 2023 • Yang Lu, Lin Chen, Yonggang Zhang, Yiliang Zhang, Bo Han, Yiu-ming Cheung, Hanzi Wang

The model trained on noisy labels serves as a "bad teacher" in knowledge distillation, aiming to decrease the risk of providing incorrect information.

Federated Learning • Knowledge Distillation
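A hedged sketch of the "bad teacher" idea in the excerpt: the student is penalized for agreeing with a model trained on noisy labels. The exact loss in the paper may differ.

```python
import numpy as np

def _softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# The agreement term enters with a positive sign, so minimizing the total
# loss increases disagreement with the noisy-label teacher.
def negative_distillation_loss(student_logits, bad_teacher_logits, labels, lam=0.5):
    p_s = _softmax(student_logits)
    p_t = _softmax(bad_teacher_logits)
    ce = -np.log(p_s[np.arange(len(labels)), labels] + 1e-12).mean()
    agreement = np.sum(p_t * np.log(p_s + 1e-12), axis=1).mean()
    return ce + lam * agreement
```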

Learning to Augment Distributions for Out-of-Distribution Detection

1 code implementation • NeurIPS 2023 • Qizhou Wang, Zhen Fang, Yonggang Zhang, Feng Liu, Yixuan Li, Bo Han

Accordingly, we propose Distributional-Augmented OOD Learning (DAL), alleviating the OOD distribution discrepancy by crafting an OOD distribution set that contains all distributions in a Wasserstein ball centered on the auxiliary OOD distribution.

Learning Theory • Out-of-Distribution Detection
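A loose approximation of the distributional augmentation described above: perturb each auxiliary OOD sample within a small budget and keep the variant that most confuses the OOD scorer, standing in for the worst case over a Wasserstein ball. `ood_score_fn` and the random-search budget are assumptions, not DAL's exact procedure.

```python
import numpy as np

# Random-search stand-in for the worst case over a Wasserstein ball.
def augment_ood(x_aux, ood_score_fn, eps=0.1, n_trials=8, seed=0):
    rng = np.random.default_rng(seed)
    worst = x_aux.copy()
    worst_scores = ood_score_fn(x_aux)          # higher = looks in-distribution
    for _ in range(n_trials):
        cand = x_aux + eps * rng.uniform(-1, 1, size=x_aux.shape)
        s = ood_score_fn(cand)
        mask = s > worst_scores                 # keep hardest-looking variants
        worst[mask], worst_scores[mask] = cand[mask], s[mask]
    return worst
```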

Continual Named Entity Recognition without Catastrophic Forgetting

1 code implementation • 23 Oct 2023 • Duzhen Zhang, Wei Cong, Jiahua Dong, Yahan Yu, Xiuyi Chen, Yonggang Zhang, Zhen Fang

This issue is intensified in CNER due to the consolidation of old entity types from previous steps into the non-entity type at each step, leading to what is known as the semantic shift problem of the non-entity type.

Continual Named Entity Recognition • named-entity-recognition • +1

Invariant Learning via Probability of Sufficient and Necessary Causes

1 code implementation • NeurIPS 2023 • Mengyue Yang, Zhen Fang, Yonggang Zhang, Yali Du, Furui Liu, Jean-Francois Ton, Jianhong Wang, Jun Wang

To capture the information of sufficient and necessary causes, we employ a classical concept, the probability of sufficient and necessary causes (PNS), which measures the probability that one event is both a necessary and a sufficient cause of another.
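For reference, the classical quantity this excerpt invokes is Pearl's probability of necessity and sufficiency; the second identity below holds under exogeneity and monotonicity.

```latex
% Pearl's probability of necessity and sufficiency (PNS): the probability
% that Y responds to X in both counterfactual directions.
\mathrm{PNS} = P\left(Y_{X=x} = y,\; Y_{X=x'} = y'\right)

% Under exogeneity and monotonicity, PNS identifies as a contrast of
% interventional probabilities:
\mathrm{PNS} = P\left(y \mid do(x)\right) - P\left(y \mid do(x')\right)
```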

Moderately Distributional Exploration for Domain Generalization

1 code implementation • 27 Apr 2023 • Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian

We show that MODE can endow models with provable generalization performance on unknown target domains.

Domain Generalization

Hard Sample Matters a Lot in Zero-Shot Quantization

1 code implementation • CVPR 2023 • Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H. Li, Yonggang Zhang, Bo Han, Mingkui Tan

Nonetheless, we find that the synthetic samples constructed in existing ZSQ methods can be easily fitted by models.

Quantization

FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training

1 code implementation • 3 Mar 2023 • Zhenheng Tang, Xiaowen Chu, Ryan Yide Ran, Sunwoo Lee, Shaohuai Shi, Yonggang Zhang, Yuxin Wang, Alex Qiaozhong Liang, Salman Avestimehr, Chaoyang He

It improves training efficiency, remarkably relaxes hardware requirements, and supports efficient large-scale FL experiments with stateful clients by: (1) training clients sequentially on devices; (2) decomposing the original aggregation into local aggregation on devices and global aggregation on the server; (3) scheduling tasks to mitigate straggler problems and enhance computing utility; (4) providing a distributed client state manager to support various FL algorithms.

Federated Learning • Scheduling
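A minimal sketch of points (1) and (2) from the excerpt above, assuming model weights are flat arrays and `local_update(weights, state) -> (weights, state)` is a helper for one client's training step:

```python
import numpy as np

# Sequentially simulate many stateful clients on one device, then
# aggregate locally before a global server-side average.
def device_round(global_w, client_ids, client_states, local_update):
    acc = np.zeros_like(global_w)
    for cid in client_ids:                       # sequential, not parallel
        w, client_states[cid] = local_update(global_w, client_states[cid])
        acc += w
    return acc / len(client_ids)                 # local (on-device) aggregation

def global_round(device_results):
    return np.mean(device_results, axis=0)       # global aggregation on server
```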

Watermarking for Out-of-distribution Detection

1 code implementation • 27 Oct 2022 • Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han

Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.

Out-of-Distribution Detection

Towards Lightweight Black-Box Attacks against Deep Neural Networks

1 code implementation • 29 Sep 2022 • Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian

As it is hard to mitigate the approximation error with few available samples, we propose Error TransFormer (ETF) for lightweight attacks.

Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning

1 code implementation • 6 Jun 2022 • Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xin He, Bo Han, Xiaowen Chu

In federated learning (FL), model performance typically suffers from client drift induced by data heterogeneity, and mainstream works focus on correcting client drift.

Federated Learning

Prompt Distribution Learning

no code implementations • CVPR 2022 • Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, Xinmei Tian

We present prompt distribution learning for effectively adapting a pre-trained vision-language model to address downstream recognition tasks.

Language Modelling

Understanding and Improving Graph Injection Attack by Promoting Unnoticeability

1 code implementation • ICLR 2022 • Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng

Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary can merely inject a few malicious nodes instead of modifying existing nodes or edges as in Graph Modification Attack (GMA).
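To make the GIA/GMA distinction concrete, a toy sketch of node injection on a dense adjacency matrix; the feature values and wiring here are placeholders, not the paper's attack:

```python
import numpy as np

# Extend the graph with attacker-controlled nodes wired to victim nodes,
# leaving all original nodes and edges untouched (unlike GMA, which flips
# existing entries).
def inject_nodes(adj, feats, n_inject, targets, seed=0):
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    big = np.zeros((n + n_inject, n + n_inject), dtype=adj.dtype)
    big[:n, :n] = adj                            # original graph preserved
    for i in range(n_inject):
        for t in targets:
            big[n + i, t] = big[t, n + i] = 1    # wire injected node to victims
    new_feats = rng.normal(size=(n_inject, feats.shape[1]))
    return big, np.vstack([feats, new_feats])
```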

Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs

3 code implementations • 11 Feb 2022 • Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng

Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.

Drug Discovery • Graph Learning • +1

Meta Convolutional Neural Networks for Single Domain Generalization

no code implementations • CVPR 2022 • Chaoqun Wan, Xu Shen, Yonggang Zhang, Zhiheng Yin, Xinmei Tian, Feng Gao, Jianqiang Huang, Xian-Sheng Hua

Taking meta features as reference, we propose compositional operations to eliminate irrelevant features of local convolutional features by an addressing process and then to reformulate the convolutional feature maps as a composition of related meta features.

Domain Generalization
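A hypothetical sketch of the addressing step described above: local convolutional features are compared against a learned bank of meta features and rewritten as a weighted composition of its entries, so information the bank cannot express is discarded. The scaled dot-product addressing is an assumption.

```python
import numpy as np

def _softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Address the meta-feature bank, then recompose each local descriptor as
# a weighted combination of bank entries.
def compose_with_meta(features, meta_bank):
    # features: (N, D) local descriptors; meta_bank: (K, D) meta features
    attn = _softmax(features @ meta_bank.T / np.sqrt(meta_bank.shape[1]))
    return attn @ meta_bank                      # (N, D) recomposed features
```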

Class-Disentanglement and Applications in Adversarial Detection and Defense

no code implementations • NeurIPS 2021 • Kaiwen Yang, Tianyi Zhou, Yonggang Zhang, Xinmei Tian, DaCheng Tao

In this paper, we propose "class-disentanglement", which trains a variational autoencoder $G(\cdot)$ to extract class-dependent information as $x - G(x)$ via a trade-off between reconstructing $x$ by $G(x)$ and classifying $x$ by $D(x - G(x))$: the former competes with the latter in decomposing $x$, so the latter retains only the information necessary for classification in $x - G(x)$.

Adversarial Defense • Disentanglement
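A minimal sketch of the trade-off stated in the excerpt, with `G`, `D`, and `cross_entropy` as assumed callables: $G$ is rewarded for reconstructing $x$ while $D$ must classify the residual $x - G(x)$.

```python
import numpy as np

# Trading reconstruction against residual classification pushes
# class-relevant information out of G's reconstruction.
def class_disentangle_loss(x, y, G, D, cross_entropy, lam=1.0):
    recon = G(x)
    rec_loss = np.mean((x - recon) ** 2)         # reconstruction objective
    cls_loss = cross_entropy(D(x - recon), y)    # classify the residual
    return rec_loss + lam * cls_loss
```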
