Search Results for author: Haohui Wang

Found 5 papers, 2 papers with code

HeroLT: Benchmarking Heterogeneous Long-Tailed Learning

1 code implementation • 17 Jul 2023 • Haohui Wang, Weijie Guan, Jianpeng Chen, Zi Wang, Dawei Zhou

To achieve this, we develop the most comprehensive (to the best of our knowledge) long-tailed learning benchmark named HeroLT, which integrates 13 state-of-the-art algorithms and 6 evaluation metrics on 14 real-world benchmark datasets across 4 tasks from 3 domains.

Benchmarking

GPatcher: A Simple and Adaptive MLP Model for Alleviating Graph Heterophily

no code implementations • 25 Jun 2023 • Shuaicheng Zhang, Haohui Wang, Si Zhang, Dawei Zhou

While graph heterophily has been extensively studied in recent years, a fundamental research question remains largely open: How and to what extent does graph heterophily affect the prediction performance of graph neural networks (GNNs)?

Node Classification

Characterizing Long-Tail Categories on Graphs

no code implementations • 17 May 2023 • Haohui Wang, Baoyu Jing, Kaize Ding, Yada Zhu, Liqing Zhang, Dawei Zhou

However, there is limited literature that provides a theoretical tool to characterize the behaviors of long-tail categories on graphs and understand the generalization performance in real scenarios.

Contrastive Learning • Multi-Task Learning

Dynamic Transfer Learning across Graphs

no code implementations • 1 May 2023 • Haohui Wang, Yuzhen Mao, Jianhui Sun, Si Zhang, Yonghui Fan, Dawei Zhou

Transferring knowledge across graphs plays a pivotal role in many high-stakes domains, ranging from transportation networks to e-commerce networks and from neuroscience to finance.

Transfer Learning

A Benchmark for Federated Hetero-Task Learning

1 code implementation • 7 Jun 2022 • Liuyi Yao, Dawei Gao, Zhen Wang, Yuexiang Xie, Weirui Kuang, Daoyuan Chen, Haohui Wang, Chenhe Dong, Bolin Ding, Yaliang Li

To investigate heterogeneity in real-world federated learning scenarios, we generalize classic federated learning to federated hetero-task learning, which emphasizes inconsistency across participants in terms of both data distribution and learning tasks.

Federated Learning • Meta-Learning • +2
