Search Results for author: Yonggang Zhang

Found 35 papers, 21 papers with code

Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks

no code implementations ICML 2020 Yonggang Zhang, Ya Li, Tongliang Liu, Xinmei Tian

To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs perturbed along different search directions.
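
To make direction-based querying concrete, below is a minimal sketch of the standard NES-style gradient estimator that such query-based attacks build on. This is a generic illustration rather than the paper's Dual-Path Distillation method, and `query_fn` is a hypothetical black-box loss oracle.

```python
import numpy as np

def estimate_gradient(query_fn, x, sigma=0.01, n_dirs=50, rng=None):
    """Estimate a black-box model's loss gradient by querying it along
    random search directions (NES-style antithetic finite differences)."""
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)  # random search direction
        delta = query_fn(x + sigma * u) - query_fn(x - sigma * u)
        grad += delta / (2 * sigma) * u   # finite-difference estimate along u
    return grad / n_dirs
```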

Enhancing Target-unspecific Tasks through a Features Matrix

no code implementations 6 May 2025 Fangming Cui, Yonggang Zhang, Xuan Wang, Xinmei Tian, Jun Yu

Recent developments in prompt learning of large vision-language models have significantly improved performance in target-specific tasks.

General Knowledge Prompt Learning

Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs

1 code implementation 15 Apr 2025 Rui Dai, Sile Hu, Xu Shen, Yonggang Zhang, Xinmei Tian, Jieping Ye

Task arithmetic is a straightforward yet highly effective strategy for model merging, enabling the resultant model to exhibit multi-task capabilities.

Task Arithmetic
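
For readers unfamiliar with task arithmetic itself, a minimal sketch of the standard recipe is below: fine-tuning deltas become task vectors, which are scaled and added back to the base weights. The paper's submodule-linearity analysis is not reproduced here; `lam` is an illustrative scaling coefficient, and state dicts are assumed to map names to torch tensors.

```python
def task_vector(pretrained_sd, finetuned_sd):
    # Task vector: the parameter delta induced by fine-tuning on one task.
    return {k: finetuned_sd[k] - pretrained_sd[k] for k in pretrained_sd}

def task_arithmetic_merge(pretrained_sd, task_vectors, lam=0.3):
    # Merge: add the scaled sum of task vectors to the pre-trained weights.
    merged = {k: v.clone() for k, v in pretrained_sd.items()}
    for tv in task_vectors:
        for k in merged:
            merged[k] += lam * tv[k]
    return merged
```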

Generalizable Prompt Learning of CLIP: A Brief Overview

no code implementations 3 Mar 2025 Fangming Cui, Yonggang Zhang, Xuan Wang, Xule Wang, Liang Xiao

Existing vision-language models (VLMs) such as CLIP have shown an impressive ability to generalize across various downstream tasks.

Prompt Learning

Detecting Discrepancies Between AI-Generated and Natural Images Using Uncertainty

no code implementations 8 Dec 2024 Jun Nie, Yonggang Zhang, Tongliang Liu, Yiu-ming Cheung, Bo Han, Xinmei Tian

In this work, we propose a novel approach for detecting AI-generated images by leveraging predictive uncertainty to mitigate misuse and associated risks.
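
As a rough illustration of uncertainty-based detection (not necessarily the estimator used in this paper), one can score an image by the variance of predictions under stochastic forward passes; `model` is assumed to be any classifier containing dropout layers.

```python
import torch

@torch.no_grad()
def uncertainty_score(model, x, n_samples=16):
    # MC dropout: keep dropout active at inference and measure how much
    # the softmax outputs vary across repeated stochastic forward passes.
    model.train()
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    model.eval()
    return probs.var(dim=0).sum(dim=-1)  # higher score = more uncertain input
```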

Enhancing Multiple Dimensions of Trustworthiness in LLMs via Sparse Activation Control

no code implementations 4 Nov 2024 Yuxin Xiao, Chaoqun Wan, Yonggang Zhang, Wenxiao Wang, Binbin Lin, Xiaofei He, Xu Shen, Jieping Ye

This technique leverages semantic features to control the representations in an LLM's intermediate hidden states, enabling the model to meet specific requirements such as increased honesty or heightened safety awareness.
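
A common pattern for this kind of representation control is to shift hidden states along a semantic direction via a forward hook. The sketch below shows that generic pattern in PyTorch; `direction` is a hypothetical steering vector, and the paper's sparse activation control is more selective than this blanket shift.

```python
def add_steering_hook(layer, direction, alpha=4.0):
    """Shift a transformer layer's hidden states along a unit-normalized
    semantic direction; returning a value from a forward hook replaces
    the layer's output."""
    direction = direction / direction.norm()
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * direction.to(hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return layer.register_forward_hook(hook)
```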

FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion

1 code implementation 27 Oct 2024 Zhenheng Tang, Yonggang Zhang, Peijie Dong, Yiu-ming Cheung, Amelie Chi Zhou, Bo Han, Xiaowen Chu

In this work, we take a causal view and find that the performance drop of OFL methods stems from an isolation problem: models trained in isolation on local data easily fit spurious correlations induced by data heterogeneity.

Federated Learning

Interpreting and Improving Large Language Models in Arithmetic Calculation

no code implementations 3 Sep 2024 Wei Zhang, Chaoqun Wan, Yonggang Zhang, Yiu-ming Cheung, Xinmei Tian, Xu Shen, Jieping Ye

In this work, we delve into uncovering a specific mechanism by which LLMs execute calculations.

From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning

no code implementations 3 Sep 2024 Wei Chen, Zhen Huang, Liang Xie, Binbin Lin, Houqiang Li, Le Lu, Xinmei Tian, Deng Cai, Yonggang Zhang, Wenxiao Wang, Xu Shen, Jieping Ye

Recent works propose to employ supervised fine-tuning (SFT) to mitigate the sycophancy issue, but it typically degrades LLMs' general capability.

MOS: Model Synergy for Test-Time Adaptation on LiDAR-Based 3D Object Detection

no code implementations 21 Jun 2024 Zhuoxiao Chen, Junjie Meng, Mahsa Baktashmotlagh, Yonggang Zhang, Zi Huang, Yadan Luo

Specifically, we propose a Model Synergy (MOS) strategy that dynamically selects historical checkpoints with diverse knowledge and assembles them to best accommodate the current test batch.

3D Object Detection object-detection +1
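
The checkpoint-selection criterion is the paper's contribution; the assembly step itself can be as simple as a weighted parameter average, sketched below under that simplifying assumption.

```python
def assemble_checkpoints(state_dicts, weights):
    # Weighted average of the parameters of the selected historical
    # checkpoints (weights are normalized to sum to one).
    total = sum(weights)
    merged = {}
    for k in state_dicts[0]:
        merged[k] = sum(w / total * sd[k] for w, sd in zip(weights, state_dicts))
    return merged
```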

A 7K Parameter Model for Underwater Image Enhancement based on Transmission Map Prior

1 code implementation 25 May 2024 Fuheng Zhou, Dikai Wei, Ye Fan, Yulong Huang, Yonggang Zhang

Although deep learning based models for underwater image enhancement have achieved good performance, they struggle to be simultaneously lightweight and effective, which prevents their deployment on resource-constrained platforms.

Data Compression Decoder +2

NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation

1 code implementation 13 Mar 2024 Pengfei Zheng, Yonggang Zhang, Zhen Fang, Tongliang Liu, Defu Lian, Bo Han

Hence, NoiseDiffusion performs interpolation within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.

Denoising
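
For reference, the spherical linear interpolation (slerp) baseline that NoiseDiffusion moves beyond is sketched below; NoiseDiffusion instead interpolates in the noisy-image space and re-injects raw image information, which this sketch does not implement.

```python
import numpy as np

def slerp(x0, x1, t):
    # Spherical linear interpolation between two latent/noise tensors,
    # interpolating along the great circle rather than the straight line.
    a, b = x0.ravel(), x1.ravel()
    cos = np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    theta = np.arccos(cos)
    return (np.sin((1 - t) * theta) * x0 + np.sin(t * theta) * x1) / np.sin(theta)
```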

ConjNorm: Tractable Density Estimation for Out-of-Distribution Detection

no code implementations 27 Feb 2024 Bo Peng, Yadan Luo, Yonggang Zhang, Yixuan Li, Zhen Fang

Extensive experiments across OOD detection benchmarks empirically demonstrate that our proposed ConjNorm establishes a new state of the art in a variety of OOD detection setups, outperforming the current best method by up to 13.25% and 28.19% (FPR95) on CIFAR-100 and ImageNet-1K, respectively.

Density Estimation Out-of-Distribution Detection +1

Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting

1 code implementation 23 Feb 2024 Rong Dai, Yonggang Zhang, Ang Li, Tongliang Liu, Xun Yang, Bo Han

These hard samples are then employed to promote the quality of the ensemble model by adjusting the ensembling weights for each client model.

Federated Learning
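
One plausible way to turn hard-sample performance into ensembling weights is sketched below; this is an illustrative assumption rather than the paper's rule, and `hard_x`, `hard_y` stand for a hypothetical labeled batch of hard samples.

```python
import torch
import torch.nn.functional as F

def ensemble_weights(client_models, hard_x, hard_y):
    # Score each client model on the hard batch; lower loss -> higher weight.
    losses = torch.stack([F.cross_entropy(m(hard_x), hard_y)
                          for m in client_models])
    return torch.softmax(-losses, dim=0)
```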

Robust Training of Federated Models with Extremely Label Deficiency

2 code implementations 22 Feb 2024 Yonggang Zhang, Zhiqin Yang, Xinmei Tian, Nannan Wang, Tongliang Liu, Bo Han

Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.

FedImpro: Measuring and Improving Client Update in Federated Learning

no code implementations 10 Feb 2024 Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xinmei Tian, Tongliang Liu, Bo Han, Xiaowen Chu

First, we analyze the generalization contribution of local training and conclude that this generalization contribution is bounded by the conditional Wasserstein distance between the data distribution of different clients.

Federated Learning
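
For concreteness, one standard way to write the conditional Wasserstein distance between two clients' data distributions $P$ and $Q$ is given below; FedImpro's exact definition may differ in detail.

```latex
W_C(P, Q) = \mathbb{E}_{y}\big[\, W_1\big( P(x \mid y),\; Q(x \mid y) \big) \big],
\qquad
W_1(\mu, \nu) = \inf_{\gamma \in \Pi(\mu, \nu)} \mathbb{E}_{(u, v) \sim \gamma}\, \lVert u - v \rVert
```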

Federated Learning with Extremely Noisy Clients via Negative Distillation

1 code implementation 20 Dec 2023 Yang Lu, Lin Chen, Yonggang Zhang, Yiliang Zhang, Bo Han, Yiu-ming Cheung, Hanzi Wang

The model trained on noisy labels serves as a "bad teacher" in knowledge distillation, aiming to decrease the risk of providing incorrect information.

Federated Learning Knowledge Distillation
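
A minimal sketch of what a "bad teacher" objective can look like, assuming a negated-KL form; this is an illustrative choice rather than the paper's actual loss, and in practice such a term would be combined with a standard task loss.

```python
import torch.nn.functional as F

def negative_distillation_loss(student_logits, bad_teacher_logits, T=2.0):
    # Penalize agreement with the teacher trained on noisy labels:
    # minimizing this term maximizes the student-teacher KL divergence.
    p_teacher = F.softmax(bad_teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return -F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```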

Learning to Augment Distributions for Out-of-Distribution Detection

1 code implementation NeurIPS 2023 Qizhou Wang, Zhen Fang, Yonggang Zhang, Feng Liu, Yixuan Li, Bo Han

Accordingly, we propose Distributional-Augmented OOD Learning (DAL), alleviating the OOD distribution discrepancy by crafting an OOD distribution set that contains all distributions in a Wasserstein ball centered on the auxiliary OOD distribution.

Learning Theory Out-of-Distribution Detection
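
Schematically, the resulting objective is a distributionally robust problem over the Wasserstein ball; the notation below is assumed for illustration, with radius $r$ and weight $\lambda$ as generic hyperparameters.

```latex
\min_{\theta} \;
\mathbb{E}_{(x, y) \sim P_{\mathrm{ID}}}\big[ \ell\big(f_\theta(x), y\big) \big]
\;+\; \lambda \,
\max_{Q :\, W(Q,\, P_{\mathrm{aux}}) \le r} \;
\mathbb{E}_{x' \sim Q}\big[ \ell_{\mathrm{OOD}}\big(f_\theta(x')\big) \big]
```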

Continual Named Entity Recognition without Catastrophic Forgetting

1 code implementation 23 Oct 2023 Duzhen Zhang, Wei Cong, Jiahua Dong, Yahan Yu, Xiuyi Chen, Yonggang Zhang, Zhen Fang

This issue is intensified in CNER due to the consolidation of old entity types from previous steps into the non-entity type at each step, leading to what is known as the semantic shift problem of the non-entity type.

Continual Named Entity Recognition named-entity-recognition +1

Invariant Learning via Probability of Sufficient and Necessary Causes

1 code implementation NeurIPS 2023 Mengyue Yang, Zhen Fang, Yonggang Zhang, Yali Du, Furui Liu, Jean-Francois Ton, Jianhong Wang, Jun Wang

To capture the information of sufficient and necessary causes, we employ a classical concept, the probability of sufficient and necessary causes (PNS), which measures the probability that a feature is both a necessary and a sufficient cause of the label.
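
For binary treatment and outcome, Pearl's classical definition of PNS, and its identification under exogeneity and monotonicity, reads:

```latex
\mathrm{PNS} = P\big( Y_{X=1} = 1 \,\wedge\, Y_{X=0} = 0 \big)
\qquad\text{and, under exogeneity and monotonicity,}\qquad
\mathrm{PNS} = P(Y = 1 \mid X = 1) - P(Y = 1 \mid X = 0)
```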

Moderately Distributional Exploration for Domain Generalization

1 code implementation 27 Apr 2023 Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian

We show that MODE can endow models with provable generalization performance on unknown target domains.

Domain Generalization

Hard Sample Matters a Lot in Zero-Shot Quantization

1 code implementation CVPR 2023 Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H. Li, Yonggang Zhang, Bo Han, Mingkui Tan

Nonetheless, we find that the synthetic samples constructed in existing ZSQ methods can be easily fitted by models.

Quantization

FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training

1 code implementation 3 Mar 2023 Zhenheng Tang, Xiaowen Chu, Ryan Yide Ran, Sunwoo Lee, Shaohuai Shi, Yonggang Zhang, Yuxin Wang, Alex Qiaozhong Liang, Salman Avestimehr, Chaoyang He

It improves training efficiency, remarkably relaxes the hardware requirements, and supports efficient large-scale FL experiments with stateful clients by: (1) training clients sequentially on devices; (2) decomposing the original aggregation into local and global aggregation, performed on devices and the server respectively; (3) scheduling tasks to mitigate straggler problems and improve computing utilization; (4) providing a distributed client state manager to support various FL algorithms.

Federated Learning Scheduling

Watermarking for Out-of-distribution Detection

1 code implementation 27 Oct 2022 Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han

Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.

Out-of-Distribution Detection

Towards Lightweight Black-Box Attacks against Deep Neural Networks

1 code implementation 29 Sep 2022 Chenghao Sun, Yonggang Zhang, Chaoqun Wan, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian

As it is hard to mitigate the approximation error with few available samples, we propose Error TransFormer (ETF) for lightweight attacks.

Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning

1 code implementation 6 Jun 2022 Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xin He, Bo Han, Xiaowen Chu

In federated learning (FL), model performance typically suffers from client drift induced by data heterogeneity, and mainstream works focus on correcting client drift.

Federated Learning

Prompt Distribution Learning

no code implementations CVPR 2022 Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, Xinmei Tian

We present prompt distribution learning for effectively adapting a pre-trained vision-language model to address downstream recognition tasks.

Language Modeling

Understanding and Improving Graph Injection Attack by Promoting Unnoticeability

1 code implementation ICLR 2022 Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng

Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario against Graph Neural Networks (GNNs), where the adversary merely injects a few malicious nodes instead of modifying existing nodes or edges as in Graph Modification Attack (GMA).

Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs

3 code implementations 11 Feb 2022 Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng

Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.

Drug Discovery Graph Learning +1

Meta Convolutional Neural Networks for Single Domain Generalization

no code implementations CVPR 2022 Chaoqun Wan, Xu Shen, Yonggang Zhang, Zhiheng Yin, Xinmei Tian, Feng Gao, Jianqiang Huang, Xian-Sheng Hua

Taking meta features as reference, we propose compositional operations to eliminate irrelevant features of local convolutional features by an addressing process and then to reformulate the convolutional feature maps as a composition of related meta features.

Photo to Rest Generalization

Class-Disentanglement and Applications in Adversarial Detection and Defense

no code implementations NeurIPS 2021 Kaiwen Yang, Tianyi Zhou, Yonggang Zhang, Xinmei Tian, Dacheng Tao

In this paper, we propose "class-disentanglement", which trains a variational autoencoder $G(\cdot)$ to extract class-dependent information as $x - G(x)$ via a trade-off between reconstructing $x$ by $G(x)$ and classifying $x$ by $D(x - G(x))$: the former competes with the latter in decomposing $x$, so that $x - G(x)$ retains only the information necessary for classification.

Adversarial Defense Disentanglement
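
The trade-off described above translates almost directly into a loss. A minimal sketch follows, omitting the VAE's KL regularizer for brevity, with `lam` an illustrative weight.

```python
import torch.nn.functional as F

def class_disentanglement_loss(G, D, x, y, lam=1.0):
    # G(x) reconstructs the class-independent content of x, while D must
    # classify the residual x - G(x); perfect reconstruction would leave D
    # nothing to work with, so the two terms compete over the content of x.
    recon = G(x)
    rec_loss = F.mse_loss(recon, x)               # reconstruct x by G(x)
    cls_loss = F.cross_entropy(D(x - recon), y)   # classify x by D(x - G(x))
    return rec_loss + lam * cls_loss
```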
