Search Results for author: Dawei Zhou

Found 28 papers, 16 papers with code

LW2G: Learning Whether to Grow for Prompt-based Continual Learning

1 code implementation · 27 Sep 2024 · Qian Feng, Dawei Zhou, Hanbin Zhao, Chao Zhang, Hui Qian

To promote cross-task knowledge sharing and form an effective and efficient pool of prompt sets, we propose a plug-in module in the former stage to Learn Whether to Grow (LW2G), based on the disparities between tasks.

Continual Learning · Retrieval

When Heterophily Meets Heterogeneity: New Graph Benchmarks and Effective Methods

1 code implementation · 15 Jul 2024 · Junhong Lin, Xiaojie Guo, Shuaicheng Zhang, Dawei Zhou, Yada Zhu, Julian Shun

However, existing benchmarks for graph learning often focus on heterogeneous graphs with homophily or homogeneous graphs with heterophily, leaving a gap in understanding how methods perform on graphs that are both heterogeneous and heterophilic.

Graph Learning

Enhancing Size Generalization in Graph Neural Networks through Disentangled Representation Learning

1 code implementation · 7 Jun 2024 · Zheng Huang, Qihui Yang, Dawei Zhou, Yujun Yan

Although most graph neural networks (GNNs) can operate on graphs of any size, their classification performance often declines on graphs larger than those encountered during training.

Representation Learning

Improving Accuracy-robustness Trade-off via Pixel Reweighted Adversarial Training

1 code implementation · 2 Jun 2024 · Jiacheng Zhang, Feng Liu, Dawei Zhou, Jingfeng Zhang, Tongliang Liu

However, in this paper, we discover that not all pixels contribute equally to the accuracy on adversarial examples (i.e., robustness) and the accuracy on natural images (i.e., accuracy).

Robust classification

Causality-Aware Spatiotemporal Graph Neural Networks for Spatiotemporal Time Series Imputation

no code implementations · 18 Mar 2024 · Baoyu Jing, Dawei Zhou, Kan Ren, Carl Yang

In this paper, we first revisit spatiotemporal time series imputation from a causal perspective and show how to block the confounders via the frontdoor adjustment.

Graph Neural Network · Imputation +2
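The frontdoor adjustment mentioned in the abstract above is the standard causal-inference identity, stated here in its generic form (not the paper's specific estimator): with the learned representation acting as a mediator $M$ between input $X$ and output $Y$,

```latex
P(Y \mid \mathrm{do}(X)) \;=\; \sum_{m} P(m \mid X) \sum_{x'} P(Y \mid m, x')\, P(x')
```

Intuitively, the outer sum marginalizes over the mediator, while the inner sum blocks the backdoor path from the mediator back to the confounder by re-averaging over inputs.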

Gradient constrained sharpness-aware prompt learning for vision-language models

no code implementations · 14 Sep 2023 · Liangchen Liu, Nannan Wang, Dawei Zhou, Xinbo Gao, Decheng Liu, Xi Yang, Tongliang Liu

This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLM), i.e., improving the performance on unseen classes while maintaining the performance on seen classes.

Towards Reliable Rare Category Analysis on Graphs via Individual Calibration

1 code implementation · 19 Jul 2023 · Longfeng Wu, Bowen Lei, Dongkuan Xu, Dawei Zhou

In particular, to quantify the uncertainties in RCA, we develop a node-level uncertainty quantification algorithm to model the overlapping support regions with high uncertainty. To handle the rarity of minority classes in miscalibration calculation, we generalize the distribution-based calibration metric to the instance level and propose the first individual calibration measurement on graphs, named Expected Individual Calibration Error (EICE).

Fraud Detection · Network Intrusion Detection +1
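For context, the distribution-based calibration metric that EICE generalizes to the instance level is the standard binned Expected Calibration Error (ECE). A minimal sketch follows; the function name and equal-width binning scheme are generic illustrations, not the paper's implementation:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the bin-weighted average gap between mean confidence
    and empirical accuracy. (EICE, per the paper, moves this measurement
    from bins down to individual nodes on a graph.)"""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(confidences), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)  # half-open bins
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += (mask.sum() / n) * gap  # weight by bin population
    return ece
```

For example, two predictions made with confidence 1.0 of which only one is correct give an ECE of 0.5, while ten predictions at confidence 0.9 with nine correct are perfectly calibrated (ECE 0).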

HeroLT: Benchmarking Heterogeneous Long-Tailed Learning

1 code implementation · 17 Jul 2023 · Haohui Wang, Weijie Guan, Jianpeng Chen, Zi Wang, Dawei Zhou

To achieve this, we develop the most comprehensive (to the best of our knowledge) long-tailed learning benchmark named HeroLT, which integrates 13 state-of-the-art algorithms and 6 evaluation metrics on 14 real-world benchmark datasets across 4 tasks from 3 domains.

Benchmarking

GPatcher: A Simple and Adaptive MLP Model for Alleviating Graph Heterophily

no code implementations · 25 Jun 2023 · Shuaicheng Zhang, Haohui Wang, Si Zhang, Dawei Zhou

While graph heterophily has been extensively studied in recent years, a fundamental research question largely remains nascent: How and to what extent will graph heterophily affect the prediction performance of graph neural networks (GNNs)?

Node Classification

Mastering Long-Tail Complexity on Graphs: Characterization, Learning, and Generalization

no code implementations · 17 May 2023 · Haohui Wang, Baoyu Jing, Kaize Ding, Yada Zhu, Wei Cheng, Si Zhang, Yonghui Fan, Liqing Zhang, Dawei Zhou

To bridge this gap, we propose a generalization bound for long-tail classification on graphs by formulating the problem in the fashion of multi-task learning, i.e., each task corresponds to the prediction of one particular class.

Classification · Contrastive Learning +1

Personalized Federated Learning under Mixture of Distributions

1 code implementation · 1 May 2023 · Yue Wu, Shuaicheng Zhang, Wenchao Yu, Yanchi Liu, Quanquan Gu, Dawei Zhou, Haifeng Chen, Wei Cheng

The recent trend towards Personalized Federated Learning (PFL) has garnered significant attention as it allows for the training of models that are tailored to each client while maintaining data privacy.

Personalized Federated Learning · Uncertainty Quantification

EvoluNet: Advancing Dynamic Non-IID Transfer Learning on Graphs

1 code implementation · 1 May 2023 · Haohui Wang, Yuzhen Mao, Yujun Yan, Yaoqing Yang, Jianhui Sun, Kevin Choi, Balaji Veeramani, Alison Hu, Edward Bowen, Tyler Cody, Dawei Zhou

To answer it, we propose a generalization bound for dynamic non-IID transfer learning on graphs, which implies that the generalization performance is dominated by domain evolution and the domain discrepancy between source and target graphs.

Transfer Learning

FairGen: Towards Fair Graph Generation

no code implementations · 30 Mar 2023 · Lecheng Zheng, Dawei Zhou, Hanghang Tong, Jiejun Xu, Yada Zhu, Jingrui He

In addition, we propose a generic context sampling strategy for graph generative models, which is proven to be capable of fairly capturing the contextual information of each group with a high probability.

Data Augmentation · Fairness +3

Augmenting Knowledge Transfer across Graphs

1 code implementation · 9 Dec 2022 · Yuzhen Mao, Jianhui Sun, Dawei Zhou

Given a resource-rich source graph and a resource-scarce target graph, how can we effectively transfer knowledge across graphs and ensure a good generalization performance?

Domain Adaptation · Transfer Learning

Towards High-Order Complementary Recommendation via Logical Reasoning Network

1 code implementation · 9 Dec 2022 · Longfeng Wu, Yao Zhou, Dawei Zhou

Finally, we further propose a hybrid network that is jointly optimized for learning a more generic product representation.

Logical Reasoning · Negation +2

Towards Accurate Subgraph Similarity Computation via Neural Graph Pruning

1 code implementation · 19 Oct 2022 · Linfeng Liu, Xu Han, Dawei Zhou, Li-Ping Liu

In this work, we convert graph pruning to a problem of node relabeling and then relax it to a differentiable problem.

Strength-Adaptive Adversarial Training

no code implementations · 4 Oct 2022 · Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu

Firstly, applying a pre-specified perturbation budget to networks of various model capacities will yield divergent degrees of robustness disparity between natural and robust accuracies, which deviates from a robust network's desideratum.

Adversarial Robustness · Scheduling

Hiding Visual Information via Obfuscating Adversarial Perturbations

1 code implementation · ICCV 2023 · Zhigang Su, Dawei Zhou, Nannan Wang, Decheng Li, Zhen Wang, Xinbo Gao

Growing leakage and misuse of visual information raise security and privacy concerns, promoting the development of information protection techniques.

Adversarial Attack · De-identification +1

MentorGNN: Deriving Curriculum for Pre-Training GNNs

1 code implementation · 21 Aug 2022 · Dawei Zhou, Lecheng Zheng, Dongqi Fu, Jiawei Han, Jingrui He

To comprehend heterogeneous graph signals at different granularities, we propose a curriculum learning paradigm that automatically re-weights graph signals in order to ensure good generalization in the target domain.

Domain Adaptation · Graph Mining

Improving Adversarial Robustness via Mutual Information Estimation

1 code implementation · 25 Jul 2022 · Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu

To alleviate this negative effect, in this paper, we investigate the dependence between outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method.

Adversarial Defense · Adversarial Robustness +1

Modeling Adversarial Noise for Adversarial Defense

no code implementations · 29 Sep 2021 · Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.

Adversarial Defense

Modeling Adversarial Noise for Adversarial Training

1 code implementation · 21 Sep 2021 · Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.

Adversarial Defense

Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training

no code implementations · 10 Jun 2021 · Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu

However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.

Adversarial Defense · Adversarial Robustness

Towards Defending against Adversarial Examples via Attack-Invariant Features

no code implementations · 9 Jun 2021 · Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao

However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.

Adversarial Robustness

Removing Adversarial Noise in Class Activation Feature Space

no code implementations · ICCV 2021 · Dawei Zhou, Nannan Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu

Then, we train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.

Adversarial Robustness · Denoising

ADD-Defense: Towards Defending Widespread Adversarial Examples via Perturbation-Invariant Representation

no code implementations · 1 Jan 2021 · Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao

Motivated by this observation, we propose a defense framework, ADD-Defense, which extracts the invariant information, called the perturbation-invariant representation (PIR), to defend against widespread adversarial examples.
