Search Results for author: Dawei Zhou

Found 24 papers, 11 papers with code

CASPER: Causality-Aware Spatiotemporal Graph Neural Networks for Spatiotemporal Time Series Imputation

no code implementations • 18 Mar 2024 • Baoyu Jing, Dawei Zhou, Kan Ren, Carl Yang

Based on the results of the frontdoor adjustment, we introduce a novel Causality-Aware SPatiotEmpoRal graph neural network (CASPER), which contains a novel Spatiotemporal Causal Attention (SCA) and a Prompt Based Decoder (PBD).

Imputation, Time Series

Gradient constrained sharpness-aware prompt learning for vision-language models

no code implementations • 14 Sep 2023 • Liangchen Liu, Nannan Wang, Dawei Zhou, Xinbo Gao, Decheng Liu, Xi Yang, Tongliang Liu

This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs), i.e., improving the performance on unseen classes while maintaining the performance on seen classes.

Towards Reliable Rare Category Analysis on Graphs via Individual Calibration

1 code implementation • 19 Jul 2023 • Longfeng Wu, Bowen Lei, Dongkuan Xu, Dawei Zhou

In particular, to quantify the uncertainties in RCA, we develop a node-level uncertainty quantification algorithm to model the overlapping support regions with high uncertainty; to handle the rarity of minority classes in miscalibration calculation, we generalize the distribution-based calibration metric to the instance level and propose the first individual calibration measurement on graphs named Expected Individual Calibration Error (EICE).

Fraud Detection, Network Intrusion Detection +1
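The instance-level calibration idea described above can be illustrated with a minimal sketch. The kernel-smoothing scheme and all names below are illustrative assumptions, not the paper's EICE implementation: instead of binning predictions as in standard expected calibration error, each node's accuracy is estimated from nodes with similar confidence, and the per-node gaps are averaged.

```python
import numpy as np

def expected_individual_calibration_error(conf, correct, bandwidth=0.1):
    """Toy instance-level calibration error (illustrative, not the paper's EICE).

    Standard ECE buckets predictions into fixed confidence bins; here each
    node's accuracy is instead estimated by kernel-smoothing the correctness
    of nodes with similar confidence, and the per-node |accuracy - confidence|
    gaps are averaged.
    """
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Gaussian kernel weights between every pair of confidence scores.
    w = np.exp(-((conf[:, None] - conf[None, :]) ** 2) / (2 * bandwidth ** 2))
    smoothed_acc = (w @ correct) / w.sum(axis=1)  # per-node accuracy estimate
    return float(np.mean(np.abs(smoothed_acc - conf)))

# Per-node gaps, no binning required even for rare (minority-class) nodes.
err = expected_individual_calibration_error([0.9, 0.8, 0.2], [1, 1, 0])
```

The bandwidth plays the role that bin width plays in ordinary ECE: smaller values trust each node's own neighborhood more, at the cost of noisier accuracy estimates.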

HeroLT: Benchmarking Heterogeneous Long-Tailed Learning

1 code implementation • 17 Jul 2023 • Haohui Wang, Weijie Guan, Jianpeng Chen, Zi Wang, Dawei Zhou

To achieve this, we develop the most comprehensive (to the best of our knowledge) long-tailed learning benchmark named HeroLT, which integrates 13 state-of-the-art algorithms and 6 evaluation metrics on 14 real-world benchmark datasets across 4 tasks from 3 domains.

Benchmarking

GPatcher: A Simple and Adaptive MLP Model for Alleviating Graph Heterophily

no code implementations • 25 Jun 2023 • Shuaicheng Zhang, Haohui Wang, Si Zhang, Dawei Zhou

While graph heterophily has been extensively studied in recent years, a fundamental research question remains largely open: How and to what extent will graph heterophily affect the prediction performance of graph neural networks (GNNs)?

Node Classification

Characterizing Long-Tail Categories on Graphs

no code implementations • 17 May 2023 • Haohui Wang, Baoyu Jing, Kaize Ding, Yada Zhu, Liqing Zhang, Dawei Zhou

However, there is limited literature that provides a theoretical tool to characterize the behaviors of long-tail categories on graphs and understand the generalization performance in real scenarios.

Contrastive Learning, Multi-Task Learning

Dynamic Transfer Learning across Graphs

no code implementations • 1 May 2023 • Haohui Wang, Yuzhen Mao, Jianhui Sun, Si Zhang, Yonghui Fan, Dawei Zhou

Transferring knowledge across graphs plays a pivotal role in many high-stakes domains, ranging from transportation networks to e-commerce networks, from neuroscience to finance.

Transfer Learning

Personalized Federated Learning under Mixture of Distributions

1 code implementation • 1 May 2023 • Yue Wu, Shuaicheng Zhang, Wenchao Yu, Yanchi Liu, Quanquan Gu, Dawei Zhou, Haifeng Chen, Wei Cheng

The recent trend towards Personalized Federated Learning (PFL) has garnered significant attention as it allows for the training of models that are tailored to each client while maintaining data privacy.

Personalized Federated Learning, Uncertainty Quantification

FairGen: Towards Fair Graph Generation

no code implementations • 30 Mar 2023 • Lecheng Zheng, Dawei Zhou, Hanghang Tong, Jiejun Xu, Yada Zhu, Jingrui He

In addition, we propose a generic context sampling strategy for graph generative models, which is proven to be capable of fairly capturing the contextual information of each group with a high probability.

Data Augmentation, Fairness +3

Towards High-Order Complementary Recommendation via Logical Reasoning Network

1 code implementation • 9 Dec 2022 • Longfeng Wu, Yao Zhou, Dawei Zhou

Finally, we further propose a hybrid network that is jointly optimized for learning a more generic product representation.

Logical Reasoning, Negation +2

Augmenting Knowledge Transfer across Graphs

1 code implementation • 9 Dec 2022 • Yuzhen Mao, Jianhui Sun, Dawei Zhou

Given a resource-rich source graph and a resource-scarce target graph, how can we effectively transfer knowledge across graphs and ensure a good generalization performance?

Domain Adaptation, Transfer Learning

Towards Accurate Subgraph Similarity Computation via Neural Graph Pruning

1 code implementation • 19 Oct 2022 • Linfeng Liu, Xu Han, Dawei Zhou, Li-Ping Liu

In this work, we convert graph pruning to a problem of node relabeling and then relax it to a differentiable problem.
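The relaxation described above can be sketched in a few lines. The sigmoid parameterization and edge-weighting rule here are illustrative assumptions, not the paper's method: a hard 0/1 keep-label per node becomes a keep-probability, so the pruned adjacency matrix varies smoothly with the label parameters and gradients can flow through it.

```python
import numpy as np

def soft_prune(adj, logits):
    """Relax hard node pruning into a differentiable relabeling (a sketch).

    A hard prune assigns each node a 0/1 keep-label, which is not
    differentiable. Relaxing the label to a sigmoid keep-probability p_i
    lets gradients flow: each edge (i, j) is down-weighted by p_i * p_j,
    yielding a soft-pruned adjacency matrix.
    """
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))  # keep-probabilities
    return adj * np.outer(p, p)

# A 3-node path graph; large negative logit (softly) prunes node 2.
adj = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
soft = soft_prune(adj, logits=[5.0, 5.0, -5.0])
```

As the logits are pushed toward ±∞, the soft adjacency converges to the hard-pruned one, so the relaxation can be annealed back to a discrete pruning decision after training.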

Strength-Adaptive Adversarial Training

no code implementations • 4 Oct 2022 • Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu

First, applying a pre-specified perturbation budget to networks of various model capacities yields divergent degrees of robustness disparity between natural and robust accuracies, which deviates from the desideratum of a robust network.

Adversarial Robustness, Scheduling

Hiding Visual Information via Obfuscating Adversarial Perturbations

1 code implementation • ICCV 2023 • Zhigang Su, Dawei Zhou, Nannan Wang, Decheng Liu, Zhen Wang, Xinbo Gao

Growing leakage and misuse of visual information raise security and privacy concerns, which promotes the development of information protection.

Adversarial Attack, De-identification +1

MentorGNN: Deriving Curriculum for Pre-Training GNNs

1 code implementation • 21 Aug 2022 • Dawei Zhou, Lecheng Zheng, Dongqi Fu, Jiawei Han, Jingrui He

To comprehend heterogeneous graph signals at different granularities, we propose a curriculum learning paradigm that automatically re-weighs graph signals in order to ensure a good generalization in the target domain.

Domain Adaptation, Graph Mining

Improving Adversarial Robustness via Mutual Information Estimation

1 code implementation • 25 Jul 2022 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu

To alleviate this negative effect, in this paper, we investigate the dependence between outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method.

Adversarial Defense, Adversarial Robustness +1
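The quantity at the heart of the excerpt above, the mutual information between model outputs and inputs, can be made concrete with a toy estimator. The paper works with continuous features and neural estimators; the plug-in estimator for discrete variables below is only an illustration of the information-theoretic quantity, not the paper's method.

```python
import numpy as np

def mutual_information(x, y):
    """Toy plug-in mutual information estimator for discrete variables.

    Computes I(X; Y) = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) p(b)) )
    from empirical frequencies. High I(outputs; natural part of the input)
    and low I(outputs; adversarial perturbation) is the kind of dependence
    structure an information-theoretic defense aims for.
    """
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))  # joint frequency
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

# Identical variables carry maximal mutual information (log 2 nats here);
# independent variables carry none.
dependent = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
independent = mutual_information([0, 1, 0, 1], [0, 0, 1, 1])
```

For continuous, high-dimensional data this plug-in estimator is infeasible, which is why neural lower-bound estimators are typically used instead.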

Modeling Adversarial Noise for Adversarial Defense

no code implementations • 29 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defenses against adversarial attacks.

Adversarial Defense

Modeling Adversarial Noise for Adversarial Training

1 code implementation • 21 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defenses against adversarial attacks.

Adversarial Defense

Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training

no code implementations • 10 Jun 2021 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu

However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.

Adversarial Defense, Adversarial Robustness

Towards Defending against Adversarial Examples via Attack-Invariant Features

no code implementations • 9 Jun 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao

However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.

Adversarial Robustness

Removing Adversarial Noise in Class Activation Feature Space

no code implementations • ICCV 2021 • Dawei Zhou, Nannan Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu

Then, we train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.

Adversarial Robustness, Denoising

ADD-Defense: Towards Defending Widespread Adversarial Examples via Perturbation-Invariant Representation

no code implementations • 1 Jan 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao

Motivated by this observation, we propose a defense framework ADD-Defense, which extracts the invariant information called perturbation-invariant representation (PIR) to defend against widespread adversarial examples.
