no code implementations • 18 Aug 2023 • Wendong Bi, Xueqi Cheng, Bingbing Xu, Xiaoqian Sun, Li Xu, Huawei Shen
Transfer learning is a feasible way to transfer knowledge from high-quality external data in source domains to the limited data of target domains; it typically performs domain-level knowledge transfer by learning a shared posterior distribution.
no code implementations • 13 Feb 2023 • Jiayan Guo, Lun Du, Wendong Bi, Qiang Fu, Xiaojun Ma, Xu Chen, Shi Han, Dongmei Zhang, Yan Zhang
To this end, we propose HDHGR, a homophily-oriented deep heterogeneous graph rewiring approach that modifies the HG structure to improve the performance of HGNNs.
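A minimal sketch of the general idea behind homophily-oriented rewiring (an assumed simplification for illustration, not the HDHGR implementation): edges whose endpoints are dissimilar are dropped, and edges between highly similar non-adjacent node pairs are added, so that message passing aggregates mostly homophilous neighbors. The function name and thresholds are hypothetical.

```python
import numpy as np

def rewire_by_similarity(features, edges, drop_thresh=0.2, add_thresh=0.9):
    # Cosine similarity between all node pairs.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T

    # Keep only edges whose endpoint similarity exceeds drop_thresh.
    kept = [(u, v) for (u, v) in edges if sim[u, v] >= drop_thresh]

    # Add edges between non-adjacent pairs with very high similarity.
    existing = set(edges)
    n = features.shape[0]
    added = [(u, v) for u in range(n) for v in range(u + 1, n)
             if (u, v) not in existing and sim[u, v] >= add_thresh]
    return kept + added
```

In a real heterogeneous-graph setting the similarity would be computed per meta-path rather than on raw features, but the keep/add structure is the same.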
1 code implementation • 2 Feb 2023 • Wendong Bi, Bingbing Xu, Xiaoqian Sun, Li Xu, Huawei Shen, Xueqi Cheng
To combat the above challenges, we propose Knowledge Transferable Graph Neural Network (KT-GNN), which models distribution shifts during message passing and representation learning by transferring knowledge from vocal nodes to silent nodes.
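To make the vocal-to-silent transfer concrete, here is a toy sketch of the idea (an assumed simplification, not the KT-GNN implementation): "vocal" nodes carry fully observed attributes while "silent" nodes do not, so silent-node features are first shifted toward the vocal-node attribute distribution before an ordinary round of mean-aggregation message passing. All names and the mean-offset correction are hypothetical.

```python
import numpy as np

def complete_and_aggregate(X, adj, vocal_mask):
    # Estimate the distribution shift between vocal and silent nodes.
    mu_vocal = X[vocal_mask].mean(axis=0)
    mu_silent = X[~vocal_mask].mean(axis=0)

    # Shift silent-node features by the inter-group mean offset.
    X = X.copy()
    X[~vocal_mask] += mu_vocal - mu_silent

    # One round of mean aggregation over neighbors (with a self-loop).
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    return (X + adj @ X) / deg
```

The paper models the shift during message passing itself; this sketch applies a one-shot correction before aggregation purely to illustrate the direction of knowledge flow.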
1 code implementation • 31 Jan 2023 • Wendong Bi, Bingbing Xu, Xiaoqian Sun, Zidong Wang, Huawei Shen, Xueqi Cheng
However, most nodes in the tribe-style graph lack attributes, making it difficult to directly adopt existing graph learning methods (e.g., Graph Neural Networks (GNNs)).
no code implementations • 17 Sep 2022 • Wendong Bi, Lun Du, Qiang Fu, Yanlin Wang, Shi Han, Dongmei Zhang
Graph Neural Networks (GNNs) are popular machine learning methods for modeling graph data.
Ranked #6 on Node Classification on Squirrel
1 code implementation • 15 Aug 2022 • Wendong Bi, Lun Du, Qiang Fu, Yanlin Wang, Shi Han, Dongmei Zhang
Graph Neural Networks (GNNs) have shown expressive performance on graph representation learning by aggregating information from neighbors.
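The neighbor-aggregation scheme mentioned above can be illustrated with a toy single layer (a generic sketch of GNN aggregation, not tied to this particular paper): each node sums its own and its neighbors' features, then applies a learned linear transform and a ReLU nonlinearity.

```python
import numpy as np

def gnn_layer(X, adj, W):
    # Sum-aggregate over neighbors, including a self-loop.
    agg = X + adj @ X
    # Linear transform followed by a ReLU nonlinearity.
    return np.maximum(agg @ W, 0.0)
```

Real GNN layers differ mainly in the aggregation function (mean, max, attention-weighted sums) and in how layers are stacked, but this sum-transform-activate pattern is the common core.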
no code implementations • 25 Feb 2022 • Chongjian Yue, Lun Du, Qiang Fu, Wendong Bi, Hengyu Liu, Yu Gu, Di Yao
The Temporal Link Prediction task of WSDM Cup 2022 asks whether a link of a given type will occur between two given nodes within a given time span, and it expects a single model to work well simultaneously on two kinds of temporal graphs that have quite different characteristics and data properties.
no code implementations • 25 Nov 2019 • Xiaojiang Yang, Wendong Bi, Yitong Sun, Yu Cheng, Junchi Yan
Most existing works on disentangled representation learning are built solely upon a marginal independence assumption: all factors in disentangled representations should be statistically independent.