1 code implementation • 27 Sep 2024 • Qian Feng, Dawei Zhou, Hanbin Zhao, Chao Zhang, Hui Qian
To promote cross-task knowledge facilitation and form an effective and efficient pool of prompt sets, we propose a plug-in module in the former stage to Learn Whether to Grow (LW2G) based on the disparities between tasks.
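As a rough picture of how a growth decision might hinge on task disparity, the sketch below adds a new prompt set only when a task looks sufficiently different from everything already in the pool; the cosine-similarity measure, the threshold, and the function names are illustrative assumptions, not the actual LW2G criterion.

```python
import torch

def should_grow(new_task_feat: torch.Tensor,
                pool_feats: list[torch.Tensor],
                threshold: float = 0.5) -> bool:
    """Toy growth rule: add a new prompt set only if the new task is
    sufficiently dissimilar from every prompt set already in the pool.
    The similarity measure and threshold are illustrative assumptions,
    not the criterion used by LW2G."""
    if not pool_feats:
        return True
    sims = torch.stack([
        torch.cosine_similarity(new_task_feat, p, dim=0) for p in pool_feats
    ])
    # Grow only when even the closest existing prompt set is too far away.
    return bool(sims.max() < 1.0 - threshold)
```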
1 code implementation • 15 Jul 2024 • Junhong Lin, Xiaojie Guo, Shuaicheng Zhang, Dawei Zhou, Yada Zhu, Julian Shun
However, existing benchmarks for graph learning often focus on heterogeneous graphs with homophily or homogeneous graphs with heterophily, leaving a gap in understanding how methods perform on graphs that are both heterogeneous and heterophilic.
1 code implementation • 7 Jun 2024 • Zheng Huang, Qihui Yang, Dawei Zhou, Yujun Yan
Although most graph neural networks (GNNs) can operate on graphs of any size, their classification performance often declines on graphs larger than those encountered during training.
1 code implementation • 2 Jun 2024 • Jiacheng Zhang, Feng Liu, Dawei Zhou, Jingfeng Zhang, Tongliang Liu
However, in this paper, we discover that not all pixels contribute equally to the accuracy on AEs (i.e., robustness) and accuracy on natural images (i.e., accuracy).
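One way to act on the observation that pixels contribute unequally is to derive per-pixel importance weights, for instance from loss gradients; the snippet below is a minimal sketch under that assumption and does not reproduce the paper's actual weighting scheme.

```python
import torch
import torch.nn.functional as F

def pixel_importance(model, x_adv: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Toy per-pixel importance: magnitude of the loss gradient w.r.t. each pixel,
    normalized per image. This is an assumed illustration of 'not all pixels
    contribute equally', not the weighting used in the paper.

    x_adv: (N, C, H, W) adversarial images; y: (N,) labels.
    """
    x_adv = x_adv.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    weights = grad.abs()
    # Normalize so each image's pixel weights sum to 1.
    return weights / (weights.sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
```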
no code implementations • 18 Mar 2024 • Baoyu Jing, Dawei Zhou, Kan Ren, Carl Yang
In this paper, we first revisit spatiotemporal time series imputation from a causal perspective and show how to block the confounders via the frontdoor adjustment.
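For reference, the standard front-door adjustment that blocks a hidden confounder between a cause X and an outcome Y through a mediator M is:

```latex
P\big(Y \mid \mathrm{do}(X = x)\big) \;=\; \sum_{m} P(m \mid x) \sum_{x'} P\big(Y \mid m, x'\big)\, P(x')
```

How the imputation model instantiates X, Y, and M is specific to the paper and not spelled out here.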
no code implementations • 26 Jan 2024 • Nuoyan Zhou, Dawei Zhou, Decheng Liu, Xinbo Gao, Nannan Wang
Deep neural networks are vulnerable to adversarial samples.
1 code implementation • 5 Oct 2023 • Nuoyan Zhou, Nannan Wang, Decheng Liu, Dawei Zhou, Xinbo Gao
Deep neural networks are vulnerable to adversarial noise.
Ranked #1 on Adversarial Attack on CIFAR-10 (Attack: AutoAttack metric)
no code implementations • 14 Sep 2023 • Liangchen Liu, Nannan Wang, Dawei Zhou, Xinbo Gao, Decheng Liu, Xi Yang, Tongliang Liu
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLM), i.e., improving the performance on unseen classes while maintaining the performance on seen classes.
1 code implementation • 19 Jul 2023 • Longfeng Wu, Bowen Lei, Dongkuan Xu, Dawei Zhou
In particular, to quantify the uncertainties in RCA, we develop a node-level uncertainty quantification algorithm to model the overlapping support regions with high uncertainty; to handle the rarity of minority classes in miscalibration calculation, we generalize the distribution-based calibration metric to the instance level and propose the first individual calibration measurement on graphs named Expected Individual Calibration Error (EICE).
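An instance-level calibration error can be pictured as the gap between each node's predicted confidence and its correctness, averaged over nodes; the sketch below is only one plausible reading and may differ from the EICE actually defined in the paper.

```python
import torch

def expected_individual_calibration_error(probs: torch.Tensor,
                                           labels: torch.Tensor) -> torch.Tensor:
    """Toy instance-level calibration score: per-node |confidence - correctness|,
    averaged over nodes. This is an assumed simplification for illustration,
    not necessarily the EICE defined in the paper.

    probs:  (num_nodes, num_classes) softmax outputs
    labels: (num_nodes,) ground-truth class indices
    """
    conf, pred = probs.max(dim=1)
    correct = (pred == labels).float()
    return (conf - correct).abs().mean()
```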
1 code implementation • 17 Jul 2023 • Haohui Wang, Weijie Guan, Jianpeng Chen, Zi Wang, Dawei Zhou
To achieve this, we develop the most comprehensive (to the best of our knowledge) long-tailed learning benchmark named HeroLT, which integrates 13 state-of-the-art algorithms and 6 evaluation metrics on 14 real-world benchmark datasets across 4 tasks from 3 domains.
no code implementations • 25 Jun 2023 • Shuaicheng Zhang, Haohui Wang, Si Zhang, Dawei Zhou
While graph heterophily has been extensively studied in recent years, a fundamental research question largely remains nascent: How and to what extent will graph heterophily affect the prediction performance of graph neural networks (GNNs)?
no code implementations • 17 May 2023 • Haohui Wang, Baoyu Jing, Kaize Ding, Yada Zhu, Wei Cheng, Si Zhang, Yonghui Fan, Liqing Zhang, Dawei Zhou
To bridge this gap, we propose a generalization bound for long-tail classification on graphs by formulating the problem in the fashion of multi-task learning, i.e., each task corresponds to the prediction of one particular class.
1 code implementation • 1 May 2023 • Yue Wu, Shuaicheng Zhang, Wenchao Yu, Yanchi Liu, Quanquan Gu, Dawei Zhou, Haifeng Chen, Wei Cheng
The recent trend towards Personalized Federated Learning (PFL) has garnered significant attention as it allows for the training of models that are tailored to each client while maintaining data privacy.
1 code implementation • 1 May 2023 • Haohui Wang, Yuzhen Mao, Yujun Yan, Yaoqing Yang, Jianhui Sun, Kevin Choi, Balaji Veeramani, Alison Hu, Edward Bowen, Tyler Cody, Dawei Zhou
To answer it, we propose a generalization bound for dynamic non-IID transfer learning on graphs, which implies that the generalization performance is dominated by domain evolution and domain discrepancy between source and target graphs.
no code implementations • 30 Mar 2023 • Lecheng Zheng, Dawei Zhou, Hanghang Tong, Jiejun Xu, Yada Zhu, Jingrui He
In addition, we propose a generic context sampling strategy for graph generative models, which is proven to be capable of fairly capturing the contextual information of each group with a high probability.
1 code implementation • 9 Dec 2022 • Yuzhen Mao, Jianhui Sun, Dawei Zhou
Given a resource-rich source graph and a resource-scarce target graph, how can we effectively transfer knowledge across graphs and ensure a good generalization performance?
1 code implementation • 9 Dec 2022 • Longfeng Wu, Yao Zhou, Dawei Zhou
Finally, we propose a hybrid network that is jointly optimized to learn a more generic product representation.
1 code implementation • 19 Oct 2022 • Linfeng Liu, Xu Han, Dawei Zhou, Li-Ping Liu
In this work, we convert graph pruning to a problem of node relabeling and then relax it to a differentiable problem.
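A common way to make such a discrete keep/drop decision differentiable is to attach a learnable score to every node and squash it into a soft keep-probability; the module below sketches that generic relaxation and is not the paper's exact relabeling formulation.

```python
import torch
import torch.nn as nn

class SoftNodePruner(nn.Module):
    """Generic differentiable relaxation of node pruning: each node gets a
    learnable score, squashed to a keep-probability that scales its features.
    This is an illustrative relaxation, not the paper's relabeling scheme."""

    def __init__(self, num_nodes: int):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(num_nodes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        keep_prob = torch.sigmoid(self.scores)   # (num_nodes,)
        return x * keep_prob.unsqueeze(-1)       # softly mask node features
```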
no code implementations • 4 Oct 2022 • Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu
First, applying a pre-specified perturbation budget to networks of various model capacities yields divergent degrees of robustness disparity between natural and robust accuracies, which deviates from the desideratum of a robust network.
1 code implementation • ICCV 2023 • Zhigang Su, Dawei Zhou, Nannan Wang, Decheng Liu, Zhen Wang, Xinbo Gao
Growing leakage and misuse of visual information raise security and privacy concerns, which promotes the development of information protection.
1 code implementation • 21 Aug 2022 • Dawei Zhou, Lecheng Zheng, Dongqi Fu, Jiawei Han, Jingrui He
To comprehend heterogeneous graph signals at different granularities, we propose a curriculum learning paradigm that automatically re-weights graph signals in order to ensure good generalization in the target domain.
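Automatic re-weighting during training can be sketched with a simple loss-based curriculum that favours easy samples early and flattens toward uniform weights later; the schedule below is a generic illustration rather than the paradigm proposed in the paper.

```python
import torch

def curriculum_weights(per_sample_loss: torch.Tensor, epoch: int,
                       total_epochs: int) -> torch.Tensor:
    """Generic curriculum re-weighting: early in training favour low-loss
    (easy) samples, gradually flattening toward uniform weights. This schedule
    is an assumed illustration, not the automatic re-weighting in the paper."""
    # Temperature grows with training progress, softening the preference for easy samples.
    temperature = 1.0 + 9.0 * epoch / max(total_epochs - 1, 1)
    weights = torch.softmax(-per_sample_loss / temperature, dim=0)
    return weights * per_sample_loss.numel()   # keep the average weight near 1
```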
1 code implementation • 25 Jul 2022 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu
To alleviate this negative effect, in this paper, we investigate the dependence between outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method.
no code implementations • 29 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defenses against adversarial attacks.
1 code implementation • 21 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defenses against adversarial attacks.
no code implementations • 10 Jun 2021 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.
no code implementations • 9 Jun 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao
However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
no code implementations • ICCV 2021 • Dawei Zhou, Nannan Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu
Then, we train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
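Such a denoising objective can be sketched as minimizing a feature-space distance between denoised adversarial inputs and their natural counterparts; the choice of an L2 distance and of a single feature extractor below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def activation_matching_loss(denoiser, feature_extractor,
                             x_adv: torch.Tensor, x_nat: torch.Tensor) -> torch.Tensor:
    """Sketch of a denoising objective in an activation feature space:
    denoise the adversarial input, then match its features to those of the
    natural input. The L2 distance and the single feature layer are
    illustrative assumptions, not the paper's exact formulation."""
    feats_denoised = feature_extractor(denoiser(x_adv))
    with torch.no_grad():
        feats_natural = feature_extractor(x_nat)
    return F.mse_loss(feats_denoised, feats_natural)
```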
no code implementations • 1 Jan 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao
Motivated by this observation, we propose a defense framework ADD-Defense, which extracts the invariant information called perturbation-invariant representation (PIR) to defend against widespread adversarial examples.