no code implementations • 6 Sep 2024 • Yin Jin, Ningtao Wang, Ruofan Wu, Pengfei Shi, Xing Fu, Weiqiang Wang
Imbalanced data are frequently encountered in real-world classification tasks.
1 code implementation • 20 Jun 2024 • Yunfei Liu, Jintang Li, Yuehe Chen, Ruofan Wu, Ericbk Wang, Jing Zhou, Sheng Tian, Shuheng Shen, Xing Fu, Changhua Meng, Weiqiang Wang, Liang Chen
Another promising line of research involves the adoption of modularity maximization, a popular and effective measure for community detection, as the guiding principle for clustering tasks.
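For readers unfamiliar with the measure, here is a minimal sketch of how modularity scores a hard partition (illustrative code only, not the paper's clustering method):

```python
import numpy as np

def modularity(adj: np.ndarray, labels: np.ndarray) -> float:
    """Newman's modularity Q for a hard node partition of an undirected graph."""
    m = adj.sum() / 2.0                        # number of edges (adj is symmetric, 0/1)
    degrees = adj.sum(axis=1)                  # node degrees
    same = labels[:, None] == labels[None, :]  # indicator: same community
    expected = np.outer(degrees, degrees) / (2.0 * m)
    return ((adj - expected) * same).sum() / (2.0 * m)

# Toy graph: two triangles joined by one edge -- a clear two-community split.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1
print(modularity(adj, np.array([0, 0, 0, 1, 1, 1])))  # high Q for the natural split
```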
1 code implementation • 3 Jun 2024 • Jintang Li, Ruofan Wu, Xinzhou Jin, Boqun Ma, Liang Chen, Zibin Zheng
Recently, state space models (SSMs), which are framed as discretized representations of an underlying continuous-time linear dynamical system, have garnered substantial attention and achieved breakthrough advancements in independent sequence modeling.
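As background, the discretization these models rest on can be sketched in a few lines; the bilinear (Tustin) transform below is one common choice, and all names are illustrative:

```python
import numpy as np

def discretize_bilinear(A, B, dt):
    """Bilinear (Tustin) discretization of the continuous-time system x'(t) = A x(t) + B u(t)."""
    n = A.shape[0]
    left = np.linalg.inv(np.eye(n) - (dt / 2) * A)
    Ad = left @ (np.eye(n) + (dt / 2) * A)
    Bd = left @ (dt * B)
    return Ad, Bd

def ssm_recurrence(Ad, Bd, C, u):
    """Unroll y_k = C x_k with x_k = Ad x_{k-1} + Bd u_k over a 1-D input sequence."""
    x = np.zeros(Ad.shape[0])
    ys = []
    for u_k in u:
        x = Ad @ x + Bd[:, 0] * u_k
        ys.append(float(C @ x))
    return np.array(ys)

# Toy run: 4-dim state, scalar input/output.
rng = np.random.default_rng(0)
A = -np.eye(4) + 0.1 * rng.standard_normal((4, 4))
B, C = rng.standard_normal((4, 1)), rng.standard_normal(4)
Ad, Bd = discretize_bilinear(A, B, dt=0.1)
print(ssm_recurrence(Ad, Bd, C, rng.standard_normal(20)).shape)  # (20,)
```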
no code implementations • 18 Apr 2024 • Ruofan Wu, Junmin Zhong, Jennie Si
We prove qualitative properties of PAAC, including convergence of value and policy learning, solution optimality, and stability of the system dynamics.
no code implementations • 6 Feb 2024 • Ruofan Wu, Guanhua Fang, Qiying Pan, Mingyang Zhang, Tengfei Liu, Weiqiang Wang
Our research primarily addresses the theoretical underpinnings of similarity-based edge reconstruction attacks (SERA), furnishing a non-asymptotic analysis of their reconstruction capacities.
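A toy version of the attack family under analysis (an assumed simplification, not the paper's exact formulation): given leaked node embeddings, predict an edge wherever pairwise similarity clears a threshold.

```python
import numpy as np

def similarity_edge_reconstruction(Z: np.ndarray, threshold: float) -> np.ndarray:
    """Toy similarity-based edge reconstruction: predict an edge (i, j)
    whenever the cosine similarity of embeddings z_i, z_j exceeds a threshold."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # row-normalize
    sim = Zn @ Zn.T                                    # pairwise cosine similarity
    pred = (sim > threshold).astype(int)
    np.fill_diagonal(pred, 0)                          # no self-loops
    return pred

# Embeddings leaked from a hypothetical GNN; the adversary sees only Z.
Z = np.random.default_rng(1).standard_normal((8, 16))
print(similarity_edge_reconstruction(Z, threshold=0.3))
```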
no code implementations • 28 Nov 2023 • Jintang Li, Jiawang Dan, Ruofan Wu, Jing Zhou, Sheng Tian, Yunfei Liu, Baokun Wang, Changhua Meng, Weiqiang Wang, Yuchang Zhu, Liang Chen, Zibin Zheng
Over the past few years, graph neural networks (GNNs) have become powerful and practical tools for learning on (static) graph-structured data.
no code implementations • 7 Nov 2023 • Junmin Zhong, Ruofan Wu, Jennie Si
We address the issue of estimation bias in deep reinforcement learning (DRL) by introducing solution mechanisms that include a new twin TD-regularized actor-critic (TDR) method.
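For orientation only, here is one generic way to combine twin critics with a TD-based regularizer (an assumed sketch; the paper's TDR update may differ):

```python
def twin_td_regularized_loss(q1, q2, q1_next, q2_next, r, gamma, lam):
    """Illustrative twin-critic loss with a regularizer (all names assumed).
    q1, q2: current-state values from the two critics; q*_next: next-state values."""
    target = r + gamma * min(q1_next, q2_next)   # clipped double-Q style target
    td1, td2 = target - q1, target - q2          # TD errors of each critic
    mse = 0.5 * (td1 ** 2 + td2 ** 2)            # standard TD losses
    reg = lam * (q1 - q2) ** 2                   # penalize critic disagreement
    return mse + reg

print(twin_td_regularized_loss(1.0, 1.2, 1.5, 1.4, r=0.1, gamma=0.99, lam=0.1))
```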
no code implementations • 31 Oct 2023 • Ruofan Wu, Mingyang Zhang, Lingjuan Lyu, Xiaolong Xu, Xiuquan Hao, Xinyi Fu, Tengfei Liu, Tianyi Zhang, Weiqiang Wang
The paradigm of vertical federated learning (VFL), where institutions collaboratively train machine learning models by combining their local feature or label information, has achieved great success in applications to financial risk management (FRM).
no code implementations • 18 Oct 2023 • Jintang Li, Zheng Wei, Jiawang Dan, Jing Zhou, Yuchang Zhu, Ruofan Wu, Baokun Wang, Zhang Zhen, Changhua Meng, Hong Jin, Zibin Zheng, Liang Chen
Through in-depth investigations of several real-world heterogeneous graphs exhibiting varying levels of heterophily, we have observed that heterogeneous graph neural networks (HGNNs), which inherit many mechanisms from GNNs designed for homogeneous graphs, fail to generalize to heterogeneous graphs with heterophily or a low level of homophily.
no code implementations • 17 Oct 2023 • Jiawang Dan, Ruofan Wu, Yunpeng Liu, Baokun Wang, Changhua Meng, Tengfei Liu, Tianyi Zhang, Ningtao Wang, Xing Fu, Qi Li, Weiqiang Wang
Recently, the idea of designing neural models on graphs using the theory of graph kernels has emerged, yielding kernel graph neural networks (KGNNs), a more transparent and sometimes more expressive alternative to MPNNs.
no code implementations • 18 Sep 2023 • Qiying Pan, Ruofan Wu, Tengfei Liu, Tianyi Zhang, Yifei Zhu, Weiqiang Wang
Federated training of graph neural networks (GNNs) has become popular in recent years due to its ability to perform graph-related tasks under data isolation scenarios while preserving data privacy.
1 code implementation • 3 Jun 2023 • Jintang Li, Wangbin Sun, Ruofan Wu, Yuchang Zhu, Liang Chen, Zibin Zheng
Oversmoothing is a common phenomenon observed in graph neural networks (GNNs), in which an increase in the network depth leads to a deterioration in their performance.
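The phenomenon is easy to reproduce (an illustrative diagnostic, not the paper's method): stacking mean-aggregation layers drives all node features toward a common vector.

```python
import numpy as np

def mean_aggregate(adj, X):
    """One round of mean-neighbor aggregation (a bare-bones GNN layer, no weights)."""
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ X) / np.maximum(deg, 1)

def feature_spread(X):
    """Average distance of node features to their mean: near 0 means oversmoothed."""
    return float(np.linalg.norm(X - X.mean(axis=0), axis=1).mean())

rng = np.random.default_rng(0)
n = 20
adj = (rng.random((n, n)) < 0.3).astype(float)
adj = np.maximum(adj, adj.T)            # undirected
np.fill_diagonal(adj, 1)                # self-loops
X = rng.standard_normal((n, 8))
for depth in range(10):
    print(depth, round(feature_spread(X), 4))  # spread shrinks as depth grows
    X = mean_aggregate(adj, X)
```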
1 code implementation • 30 May 2023 • Jintang Li, Huizhe Zhang, Ruofan Wu, Zulun Zhu, Baokun Wang, Changhua Meng, Zibin Zheng, Liang Chen
While contrastive self-supervised learning has become the de facto learning paradigm for graph neural networks, the pursuit of higher task accuracy requires a larger hidden dimensionality to learn informative and discriminative full-precision representations. This raises largely overlooked concerns about the computation, memory footprint, and energy consumption burden in real-world applications.
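A back-of-envelope illustration of the storage argument (assumed setup, not the paper's encoder): sign-binarizing float32 embeddings and bit-packing them cuts memory by 32x.

```python
import numpy as np

def binarize(Z: np.ndarray) -> np.ndarray:
    """Sign-binarize embeddings to {-1, +1}: 1 bit per dimension vs. 32 for float32."""
    return np.where(Z >= 0, 1, -1).astype(np.int8)

Z = np.random.default_rng(0).standard_normal((1000, 512)).astype(np.float32)
B = binarize(Z)
packed = np.packbits(B > 0, axis=1)     # 1 bit per dimension after packing
print(f"float32: {Z.nbytes} bytes, bit-packed binary: {packed.nbytes} bytes")
```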
1 code implementation • 18 May 2023 • Jintang Li, Sheng Tian, Ruofan Wu, Liang Zhu, Welong Zhao, Changhua Meng, Liang Chen, Zibin Zheng, Hongzhi Yin
We approach the problem with STEP, our proposed self-supervised temporal pruning framework that learns to remove potentially redundant edges from input dynamic graphs.
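In spirit (an assumed simplification; STEP's learned objective is more involved), pruning reduces to scoring each temporal edge and dropping the lowest-scoring fraction:

```python
import numpy as np

def prune_temporal_edges(edges, scores, keep_ratio=0.7):
    """Keep the top `keep_ratio` fraction of edges by score; drop the rest.
    edges: (E, 3) array of (src, dst, timestamp); scores: (E,) relevance scores."""
    k = int(len(edges) * keep_ratio)
    keep = np.argsort(scores)[-k:]          # indices of highest-scoring edges
    return edges[np.sort(keep)]             # preserve temporal order

edges = np.array([[0, 1, 10], [1, 2, 11], [0, 2, 12], [2, 3, 13]])
scores = np.array([0.9, 0.1, 0.8, 0.5])    # e.g., from a learned edge scorer
print(prune_temporal_edges(edges, scores, keep_ratio=0.5))
```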
no code implementations • 6 Apr 2023 • Yuke Hu, Wei Liang, Ruofan Wu, Kai Xiao, Weiqiang Wang, Xiaochen Li, Jinfei Liu, Zhan Qin
Knowledge Graph Embedding (KGE) is a fundamental technique that extracts expressive representation from knowledge graph (KG) to facilitate diverse downstream tasks.
no code implementations • 6 Mar 2023 • Jiafu Wu, Mufeng Yao, Dong Wu, Mingmin Chi, Baokun Wang, Ruofan Wu, Xin Fu, Changhua Meng, Weiqiang Wang
Graph representation plays an important role in the field of financial risk control, where the relationship among users can be constructed in a graph manner.
no code implementations • 4 Feb 2023 • Ruofan Wu, Boqun Ma, Hong Jin, Wenlong Zhao, Weiqiang Wang, Tianyi Zhang
The application of graph representation learning techniques to the area of financial risk management (FRM) has attracted significant attention recently.
no code implementations • 10 Oct 2022 • Junmin Zhong, Ruofan Wu, Jennie Si
However, there is a lack of comprehensive and systematic studies of this important aspect that demonstrate the effectiveness of multi-step methods in solving highly complex continuous control problems.
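For reference, the textbook n-step return that such multi-step methods build on (not specific to this paper):

```python
def n_step_return(rewards, bootstrap_value, gamma, n):
    """G_t^(n) = r_t + gamma*r_{t+1} + ... + gamma^{n-1}*r_{t+n-1} + gamma^n * V(s_{t+n})."""
    g = sum(gamma ** i * r for i, r in enumerate(rewards[:n]))
    return g + gamma ** n * bootstrap_value

rewards = [1.0, 0.5, 0.25, 0.0]
print(n_step_return(rewards, bootstrap_value=2.0, gamma=0.99, n=3))
```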
1 code implementation • 15 Aug 2022 • Jintang Li, Zhouxin Yu, Zulun Zhu, Liang Chen, Qi Yu, Zibin Zheng, Sheng Tian, Ruofan Wu, Changhua Meng
We explore a new direction: capturing the evolving dynamics of temporal graphs with spiking neural networks (SNNs) instead of RNNs.
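The core substitution can be illustrated with a leaky integrate-and-fire (LIF) update driven by per-snapshot graph messages (a minimal assumed version, not the paper's architecture):

```python
import numpy as np

def lif_step(v, inp, decay=0.9, threshold=1.0):
    """One leaky integrate-and-fire step: integrate input into the membrane
    potential v, emit a binary spike where v crosses the threshold, then reset."""
    v = decay * v + inp
    spikes = (v >= threshold).astype(float)
    v = v * (1.0 - spikes)                  # hard reset where a spike fired
    return v, spikes

# Drive node states with per-snapshot aggregated neighbor messages (toy values).
rng = np.random.default_rng(0)
v = np.zeros(5)                             # membrane potential of 5 nodes
for t in range(4):
    messages = rng.random(5)                # stand-in for graph aggregation at time t
    v, spikes = lif_step(v, messages)
    print(t, spikes)
```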
no code implementations • 21 Jun 2022 • Yan Feng, Tao Xiong, Ruofan Wu, LingJuan Lv, Leilei Shi
In addition, at a fixed privacy and communication level, sqSGD significantly outperforms various baseline algorithms.
2 code implementations • 20 May 2022 • Jintang Li, Ruofan Wu, Wangbin Sun, Liang Chen, Sheng Tian, Liang Zhu, Changhua Meng, Zibin Zheng, Weiqiang Wang
Recent years have witnessed the emergence of a promising self-supervised learning strategy, referred to as masked autoencoding.
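The strategy itself fits in a few lines (a generic masked-autoencoding sketch, not this paper's graph-specific design): corrupt a random subset of the input and train the model to reconstruct exactly what was hidden.

```python
import numpy as np

def mask_input(X, mask_ratio=0.5, rng=None):
    """Randomly zero out a fraction of entries; return the corrupted input and mask.
    Training minimizes reconstruction error on the masked entries only."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(X.shape) < mask_ratio
    return np.where(mask, 0.0, X), mask

X = np.arange(12, dtype=float).reshape(3, 4)
X_masked, mask = mask_input(X, mask_ratio=0.5, rng=np.random.default_rng(0))
# loss = ((model(X_masked) - X)[mask] ** 2).mean()  # reconstruct only what was hidden
print(X_masked)
```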
1 code implementation • 20 Apr 2022 • Jintang Li, Jie Liao, Ruofan Wu, Liang Chen, Zibin Zheng, Jiawang Dan, Changhua Meng, Weiqiang Wang
To mitigate such a threat, considerable research efforts have been devoted to increasing the robustness of GCNs against adversarial attacks.
no code implementations • 3 Jul 2021 • Hui Li, Xing Fu, Ruofan Wu, Jinyu Xu, Kai Xiao, Xiaofu Chang, Weiqiang Wang, Shuai Chen, Leilei Shi, Tao Xiong, Yuan Qi
Deep learning provides a promising way to extract effective representations from raw data in an end-to-end fashion and has proven its effectiveness in various domains such as computer vision, natural language processing, etc.
no code implementations • 1 Jan 2021 • Tao Xiong, Liang Zhu, Ruofan Wu, Yuan Qi
Specifically, we allow every node in the original graph to interact with a group of memory nodes.
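Roughly, the mechanism looks like the following (an assumed sketch with illustrative names): every graph node attends over a small shared set of memory nodes, adding a global channel that bypasses the topology.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_to_memory(X, M):
    """Each of the n graph nodes (rows of X) attends over m shared memory nodes
    (rows of M); returns a memory-conditioned update for every node."""
    attn = softmax(X @ M.T / np.sqrt(X.shape[1]))  # (n, m) attention weights
    return attn @ M                                # (n, d) memory readout

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))   # node features
M = rng.standard_normal((8, 32))     # learnable memory nodes (fixed here for the demo)
print(attend_to_memory(X, M).shape)  # (100, 32)
```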
no code implementations • 1 Jan 2021 • Yan Feng, Tao Xiong, Ruofan Wu, Yuan Qi
We also initiate a discussion about the role of quantization and perturbation in FL algorithm design with privacy and communication constraints.
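The two ingredients under discussion, shown side by side (a toy sketch, not the sqSGD algorithm): unbiased stochastic quantization compresses updates, and Gaussian perturbation supplies privacy noise.

```python
import numpy as np

def stochastic_quantize(g, levels=4, rng=None):
    """Unbiased stochastic quantization of g onto `levels` uniform levels spanning
    [g.min(), g.max()]; E[quantized] == g, so SGD remains unbiased."""
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = g.min(), g.max()
    scaled = (g - lo) / (hi - lo) * (levels - 1)     # map to [0, levels-1]
    q = np.floor(scaled) + (rng.random(g.shape) < scaled - np.floor(scaled))
    return lo + q / (levels - 1) * (hi - lo)

def gaussian_perturb(g, sigma, rng=None):
    """Additive Gaussian noise, the standard perturbation used for privacy."""
    if rng is None:
        rng = np.random.default_rng()
    return g + rng.normal(0.0, sigma, size=g.shape)

g = np.random.default_rng(0).standard_normal(6)
print(gaussian_perturb(stochastic_quantize(g, levels=4), sigma=0.1))
```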
no code implementations • 31 Dec 2020 • Zhikai Yao, Jennie Si, Ruofan Wu, Jianyong Yao
Our proposed new design takes advantage of two control design frameworks: a reinforcement-learning-based, data-driven approach that provides the needed adaptation and (sub)optimality, and a backstepping-based approach that provides a closed-loop system stability framework.