no code implementations • ICML 2020 • Yonggang Zhang, Ya Li, Tongliang Liu, Xinmei Tian
To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs perturbed along different search directions.
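The snippet above describes query-based (zeroth-order) attacks only at a high level. As a minimal sketch of the general strategy, not this paper's method, the following estimates a gradient by finite differences along random search directions; `query_fn` (a scalar loss returned by the target model), the number of directions, and `sigma` are illustrative assumptions:

```python
import numpy as np

def estimate_gradient(query_fn, x, n_directions=50, sigma=0.01):
    """Estimate the loss gradient of a black-box model by querying it
    with inputs perturbed along random search directions (a standard
    zeroth-order / finite-difference scheme)."""
    grad = np.zeros_like(x)
    for _ in range(n_directions):
        u = np.random.randn(*x.shape)   # random search direction
        u /= np.linalg.norm(u)          # normalize to unit length
        # Two queries per direction: antithetic finite differences.
        delta = query_fn(x + sigma * u) - query_fn(x - sigma * u)
        grad += (delta / (2.0 * sigma)) * u
    return grad / n_directions

# Typical use: x_adv = x + step_size * np.sign(estimate_gradient(loss, x))
```

Each direction costs two model queries, which is why query efficiency is the central concern in this line of work.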
1 code implementation • 23 Oct 2023 • Duzhen Zhang, Wei Cong, Jiahua Dong, Yahan Yu, Xiuyi Chen, Yonggang Zhang, Zhen Fang
This issue is intensified in CNER because, at each step, old entity types from previous steps are consolidated into the non-entity type, leading to what is known as the semantic shift problem of the non-entity type (a toy illustration follows below).
Continual Named Entity Recognition
named-entity-recognition
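A toy illustration of the semantic shift described above (mine, not the paper's): when a later step only annotates new entity types, mentions of previously learned types are labeled with the non-entity tag `O`:

```python
# Sentence: "Obama visited Paris"
step1_labels = ["B-PER", "O", "O"]      # step 1 annotates PER only
step2_labels = ["O",     "O", "B-LOC"]  # step 2 annotates LOC only
# Naively training on step-2 labels teaches the model that "Obama" is
# non-entity, overwriting step-1 knowledge: the "O" tag's meaning shifts
# from "not an entity" to "not an entity of the current step's types".
```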
no code implementations • 8 Oct 2023 • Zhiqin Yang, Yonggang Zhang, Yu Zheng, Xinmei Tian, Hao Peng, Tongliang Liu, Bo Han
Comprehensive experiments demonstrate the efficacy of FedFed in promoting model performance.
1 code implementation • 27 Apr 2023 • Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian
We show that MODE can endow models with provable generalization performance on unknown target domains.
1 code implementation • CVPR 2023 • Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H. Li, Yonggang Zhang, Bo Han, Mingkui Tan
Nonetheless, we find that the synthetic samples constructed in existing ZSQ methods can be easily fitted by models.
1 code implementation • 3 Mar 2023 • Zhenheng Tang, Xiaowen Chu, Ryan Yide Ran, Sunwoo Lee, Shaohuai Shi, Yonggang Zhang, Yuxin Wang, Alex Qiaozhong Liang, Salman Avestimehr, Chaoyang He
It improves training efficiency, remarkably relaxes the hardware requirements, and supports efficient large-scale FL experiments with stateful clients by: (1) training clients sequentially on devices; (2) decomposing the original aggregation into local and global aggregation on devices and the server, respectively; (3) scheduling tasks to mitigate straggler problems and enhance computing utility; and (4) providing a distributed client state manager to support various FL algorithms.
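A hedged sketch of mechanisms (1) and (2) above, not the system's actual API: each simulated device trains its assigned clients one after another (so only one model copy lives in memory at a time) and averages their states locally before the server-side aggregation. `client.train` and `server_aggregate` are assumed interfaces:

```python
import copy
import torch

def run_round(global_model, device_clients, server_aggregate):
    """One FL round: sequential client training per device, then
    local (on-device) aggregation, then global aggregation."""
    device_updates = []
    for clients_on_device in device_clients:      # one physical device
        local_states = []
        for client in clients_on_device:          # (1) sequential training
            model = copy.deepcopy(global_model)
            client.train(model)                   # client-local SGD steps
            local_states.append(model.state_dict())
        # (2) local aggregation on the device: average client states
        local_avg = {
            k: torch.stack([s[k].float() for s in local_states]).mean(0)
            for k in local_states[0]
        }
        device_updates.append(local_avg)
    return server_aggregate(device_updates)       # global aggregation
```

Keeping a single live model per device is what relaxes the hardware requirement: memory scales with the number of devices, not the number of simulated clients.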
1 code implementation • 27 Oct 2022 • Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
Ranked #19 on Out-of-Distribution Detection on ImageNet-1k vs Places
1 code implementation • 29 Sep 2022 • Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian
As it is hard to mitigate the approximation error with few available samples, we propose Error TransFormer (ETF) for lightweight attacks.
2 code implementations • 15 Jun 2022 • Yongqiang Chen, Kaiwen Zhou, Yatao Bian, Binghui Xie, Bingzhe Wu, Yonggang Zhang, Kaili Ma, Han Yang, Peilin Zhao, Bo Han, James Cheng
Recently, there has been a surge of interest in enabling machine learning systems to generalize well to Out-of-Distribution (OOD) data.
1 code implementation • 6 Jun 2022 • Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xin He, Bo Han, Xiaowen Chu
In federated learning (FL), model performance typically suffers from client drift induced by data heterogeneity, and mainstream works focus on correcting client drift.
no code implementations • CVPR 2022 • Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, Xinmei Tian
We present prompt distribution learning for effectively adapting a pre-trained vision-language model to address downstream recognition tasks.
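The snippet does not specify the form of the prompt distribution; one plausible, hedged reading (an assumption on my part, not necessarily the paper's parameterization) is a diagonal Gaussian over soft prompt embeddings, sampled with the reparameterization trick:

```python
import torch
import torch.nn as nn

class PromptDistribution(nn.Module):
    """Sketch: model a distribution over soft prompts as a diagonal
    Gaussian. Shapes and the text-encoder interface are assumptions."""
    def __init__(self, n_tokens=16, dim=512):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)
        self.log_sigma = nn.Parameter(torch.zeros(n_tokens, dim))

    def sample(self, n=4):
        eps = torch.randn(n, *self.mu.shape)
        return self.mu + self.log_sigma.exp() * eps  # (n, n_tokens, dim)

# Downstream (assumed usage): feed each sampled prompt plus a class-name
# token through the frozen text encoder and average the resulting logits.
```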
1 code implementation • ICLR 2022 • Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng
Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary can merely inject a few malicious nodes instead of modifying existing nodes or edges, i.e., Graph Modification Attack (GMA).
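To make the GIA-vs-GMA distinction concrete, here is a minimal sketch (not the paper's attack) of the injection setup: new nodes and edges are appended while the original graph is left untouched. The injected features are placeholders that an attacker would optimize:

```python
import numpy as np

def inject_nodes(adj, feats, n_inject, n_edges_per_node, inj_feats=None):
    """Append n_inject malicious nodes, wiring each to randomly chosen
    existing nodes; original edges and features stay intact (unlike GMA)."""
    n = adj.shape[0]
    big = np.zeros((n + n_inject, n + n_inject), dtype=adj.dtype)
    big[:n, :n] = adj                              # original graph kept
    for i in range(n, n + n_inject):
        targets = np.random.choice(n, n_edges_per_node, replace=False)
        big[i, targets] = big[targets, i] = 1      # undirected injection
    if inj_feats is None:                          # placeholder features;
        inj_feats = np.zeros((n_inject, feats.shape[1]))  # optimized in practice
    return big, np.vstack([feats, inj_feats])
```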
3 code implementations • 11 Feb 2022 • Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng
Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.
no code implementations • CVPR 2022 • Chaoqun Wan, Xu Shen, Yonggang Zhang, Zhiheng Yin, Xinmei Tian, Feng Gao, Jianqiang Huang, Xian-Sheng Hua
Taking meta features as a reference, we propose compositional operations that eliminate irrelevant components of local convolutional features via an addressing process and then reformulate the convolutional feature maps as a composition of related meta features.
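One natural, hedged reading of the addressing-and-composition step (an interpretation, not the paper's exact operator) is attention of local features over a learned meta-feature bank:

```python
import torch
import torch.nn.functional as F

def compose_with_meta(local_feats, meta_feats):
    """local_feats: (B, HW, C) flattened conv features;
    meta_feats: (M, C) learned meta-feature bank."""
    # Addressing: similarity of each local feature to each meta feature.
    addr = F.softmax(local_feats @ meta_feats.t(), dim=-1)  # (B, HW, M)
    # Composition: rewrite each position as a mixture of meta features,
    # discarding components not expressible in the meta-feature bank.
    return addr @ meta_feats                                # (B, HW, C)
```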
no code implementations • NeurIPS 2021 • Kaiwen Yang, Tianyi Zhou, Yonggang Zhang, Xinmei Tian, DaCheng Tao
In this paper, we propose "class-disentanglement," which trains a variational autoencoder $G(\cdot)$ to extract class-dependent information as $x - G(x)$ via a trade-off between reconstructing $x$ by $G(x)$ and classifying $x$ by $D(x - G(x))$: the former competes with the latter in decomposing $x$, so the latter retains only the information necessary for classification in $x - G(x)$.
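The stated trade-off translates almost directly into a training loss. A minimal sketch, assuming `G` and `D` are the autoencoder and classifier from the abstract and `lam` is an assumed balancing weight:

```python
import torch.nn.functional as F

def class_disentangle_loss(x, y, G, D, lam=1.0):
    """G(x) reconstructs x while D(x - G(x)) classifies it, so
    class-dependent information is pushed into the residual x - G(x)."""
    recon = F.mse_loss(G(x), x)            # G keeps class-independent content
    cls = F.cross_entropy(D(x - G(x)), y)  # residual carries class information
    return recon + lam * cls
```

The competition is explicit: any class-relevant detail that `G` reconstructs is removed from the residual, raising the classification term, so the optimum splits `x` into class-independent and class-dependent parts.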
1 code implementation • ICLR 2022 • Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.