no code implementations • 20 Jan 2022 • Yayong Li, Jie Yin, Ling Chen
It augments the training set with high-confidence pseudo-labeled nodes drawn from the unlabeled set, so as to re-train a supervised model in a self-training cycle.
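The self-training cycle described above can be sketched generically: train on the labeled set, pseudo-label the unlabeled pool, promote only high-confidence predictions, and repeat. This is a minimal illustrative sketch using a nearest-centroid classifier, not the paper's actual model; the `threshold` and `rounds` parameters are assumptions for illustration.

```python
import numpy as np

def centroid_proba(centroids, X):
    """Softmax over negative distances to class centroids (toy classifier)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    e = np.exp(-d + d.min(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def self_training(X_l, y_l, X_u, threshold=0.9, rounds=3):
    """Generic pseudo-label self-training loop (illustrative, not the paper's method).

    Each round: fit class centroids on the current labeled set, score the
    unlabeled pool, and move predictions whose confidence exceeds `threshold`
    into the training set with their predicted labels.
    """
    X_train, y_train, pool = X_l.copy(), y_l.copy(), X_u.copy()
    classes = np.unique(y_l)
    for _ in range(rounds):
        centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
        if len(pool) == 0:
            break
        proba = centroid_proba(centroids, pool)
        conf = proba.max(axis=1)
        keep = conf >= threshold
        if not keep.any():
            break
        # Augment the training set with confident pseudo-labels.
        X_train = np.vstack([X_train, pool[keep]])
        y_train = np.concatenate([y_train, classes[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    return centroids, classes

def predict(centroids, classes, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

A real graph version would replace the centroid classifier with a GNN and filter pseudo-labels by model confidence on node predictions; the loop structure is the same.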
5 Mar 2021 • Yayong Li, Jie Yin, Ling Chen
Learning with label noise has been primarily studied in the context of image classification, but these techniques cannot be directly applied to graph-structured data, due to two major challenges -- label sparsity and label dependency -- faced by learning on graphs.
ICLR 2022 • Wei Huang, Yayong Li, Weitao Du, Jie Yin, Richard Yi Da Xu, Ling Chen, Miao Zhang
Inspired by our theoretical insights on trainability, we propose Critical DropEdge, a connectivity-aware and graph-adaptive sampling method, to alleviate the exponential decay problem more fundamentally.
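DropEdge-style methods regularize GNN training by removing a subset of edges each step. The sketch below shows only the baseline *uniform* DropEdge sampling interface; Critical DropEdge's connectivity-aware, graph-adaptive selection criterion is specific to the paper and is not reproduced here.

```python
import numpy as np

def drop_edge(edge_index, drop_rate, rng):
    """Uniform DropEdge (baseline sketch, not Critical DropEdge).

    edge_index: (2, E) integer array of [source; target] node indices.
    Each edge is independently kept with probability 1 - drop_rate.
    Critical DropEdge would replace the uniform mask below with a
    connectivity-aware sampling rule.
    """
    num_edges = edge_index.shape[1]
    keep = rng.random(num_edges) >= drop_rate
    return edge_index[:, keep]
```

In training, a fresh mask would be drawn per step before the message-passing forward pass, so the model sees a different sparsified graph each iteration.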
22 Aug 2019 • Yayong Li, Jie Yin, Ling Chen
In this paper, we propose a SEmi-supervised Adversarial active Learning (SEAL) framework on attributed graphs, which fully leverages the representation power of deep neural networks and devises a novel active learning query strategy in an adversarial manner.