1 code implementation • 14 Mar 2025 • Yuanshuo Zhang, Yuchen Hou, Bohan Tang, Shuo Chen, Muhan Zhang, Xiaowen Dong, Siheng Chen
Agentic workflows invoked by Large Language Models (LLMs) have achieved remarkable success in handling complex tasks.
no code implementations • 8 Mar 2025 • Haotong Yang, Qingyuan Zheng, Yunjian Gao, Yongkun Yang, Yangbo He, Zhouchen Lin, Muhan Zhang
With the rapid advancement of text-conditioned Video Generation Models (VGMs), the quality of generated videos has significantly improved, bringing these models closer to functioning as "world simulators" and making real-world-level video generation more accessible and cost-effective.
no code implementations • 20 Feb 2025 • Yansheng Mao, Yufei Xu, Jiaqi Li, Fanxu Meng, Haotong Yang, Zilong Zheng, Xiyuan Wang, Muhan Zhang
This paper presents Long Input Fine-Tuning (LIFT), a novel framework for long-context modeling that can improve the long-context performance of arbitrary (short-context) LLMs by dynamically adapting model parameters based on the long input.
no code implementations • 17 Feb 2025 • Yi Hu, Shijia Kang, Haotong Yang, Haotian Xu, Muhan Zhang
As a result, while LLMs can often recall rules with ease, they fail to apply these rules strictly and consistently in relevant reasoning scenarios.
1 code implementation • 11 Feb 2025 • Fanxu Meng, Zengwei Yao, Muhan Zhang
In this paper, we show that GQA can always be represented by MLA while maintaining the same KV cache overhead, but the converse does not hold.
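As background for this KV-cache comparison, a minimal accounting sketch of why grouped-query attention (GQA) shrinks the cache (the layer/head sizes below are illustrative assumptions, and this is not the paper's GQA-to-MLA construction):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Per-sequence KV-cache size: keys + values, every layer, every position."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 7B-like shapes, fp16 cache.
mha = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128, seq_len=4096)
gqa = kv_cache_bytes(n_layers=32, n_kv_heads=8,  head_dim=128, seq_len=4096)
print(mha // gqa)  # GQA shrinks the cache by n_heads / n_kv_heads = 4x
```

The paper's claim is then about representational containment at equal cache size: any GQA configuration can be rewritten as MLA with the same overhead, while the reverse direction fails.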
no code implementations • 4 Feb 2025 • Xiyuan Wang, Muhan Zhang
Recent advances in Graph Neural Networks (GNNs) have explored the potential of random noise as an input feature to enhance expressivity across diverse tasks.
no code implementations • 4 Feb 2025 • Xiyuan Wang, Yewei Liu, Lexi Pang, Siwei Chen, Muhan Zhang
Diffusion models have gained popularity in graph generation tasks; however, the extent of their expressivity concerning the graph distributions they can learn is not fully understood.
no code implementations • 20 Jan 2025 • Haotian Xu, Xing Wu, Weinong Wang, Zhongzhi Li, Da Zheng, Boyuan Chen, Yi Hu, Shijia Kang, Jiaming Ji, Yingying Zhang, Zhijiang Guo, Yaodong Yang, Muhan Zhang, Debing Zhang
In this work, we explore the untapped potential of scaling Long Chain-of-Thought (Long-CoT) data to 1000k samples, pioneering the development of a slow-thinking model, RedStar.
no code implementations • 24 Dec 2024 • Qian Tao, Xiyuan Wang, Muhan Zhang, Shuxian Hu, Wenyuan Yu, Jingren Zhou
Many recent studies have proposed the use of graph convolution methods over the numerous subgraphs of each graph, a concept known as subgraph graph neural networks (subgraph GNNs), to enhance GNNs' ability to distinguish non-isomorphic graphs.
no code implementations • 18 Dec 2024 • Yansheng Mao, Jiaqi Li, Fanxu Meng, Jing Xiong, Zilong Zheng, Muhan Zhang
Long context understanding remains challenging for large language models due to their limited context windows.
no code implementations • 8 Dec 2024 • Haotong Yang, Xiyuan Wang, Qian Tao, Shuxian Hu, Zhouchen Lin, Muhan Zhang
Recent research on integrating Large Language Models (LLMs) with Graph Neural Networks (GNNs) typically follows two approaches: LLM-centered models, which convert graph data into tokens for LLM processing, and GNN-centered models, which use LLMs to encode text features into node and edge representations for GNN input.
1 code implementation • 26 Nov 2024 • Fanxu Meng, Pingzhi Tang, Fan Jiang, Muhan Zhang
For instance, the perplexity after pruning 70% of the Q-K pairs in GPT-2 XL is similar to that after pruning just 8% with vanilla methods.
2 code implementations • 6 Nov 2024 • Weishuo Ma, Yanbo Wang, Xiyuan Wang, Muhan Zhang
Various graph neural networks (GNNs) with advanced training techniques and model designs have been proposed for link prediction tasks.
Ranked #1 on Link Property Prediction on ogbl-ppa
1 code implementation • 6 Nov 2024 • Haotong Yang, Yi Hu, Shijia Kang, Zhouchen Lin, Muhan Zhang
We also finetune practical-scale LLMs on our proposed NUPA tasks and find that 1) naive finetuning can improve NUPA a lot on many but not all tasks, and 2) surprisingly, techniques designed to enhance NUPA prove ineffective for finetuning pretrained models.
no code implementations • 13 Oct 2024 • Junru Zhou, Cai Zhou, Xiyuan Wang, Pan Li, Muhan Zhang
Graph neural networks (GNNs) have achieved remarkable success in a variety of machine learning tasks over graph data.
no code implementations • 10 Oct 2024 • Xiaojuan Tang, Jiaqi Li, Yitao Liang, Song-Chun Zhu, Muhan Zhang, Zilong Zheng
In this paper, we design Mars, an interactive environment devised for situated inductive reasoning.
no code implementations • 4 Oct 2024 • Zian Li, Cai Zhou, Xiyuan Wang, Xingang Peng, Muhan Zhang
Compared to directly generating a molecule, the relatively easy-to-generate representation in the first stage guides the second-stage generation to reach a high-quality molecule in a more goal-oriented and much faster way.
1 code implementation • 4 Oct 2024 • Junru Zhou, Muhan Zhang
The ability of graph neural networks (GNNs) to count homomorphisms has recently been proposed as a practical and fine-grained measure of their expressive power.
1 code implementation • 21 Sep 2024 • Muhan Zhang
For example, multiset {1, 2, 3, 2} is equivalent to multiset {a, b, c, b} if we specify an injective transformation that maps 1 to a, 2 to b, and 3 to c. We study the sufficient and necessary conditions for a most expressive lexically invariant (and permutation-invariant) function on multisets and graphs, and prove that for multisets, such a function must take as input only the multiset of counts of the unique elements in the original multiset.
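The counts-of-unique-elements form described above can be sketched directly (a minimal illustration of the stated condition, not the paper's construction):

```python
from collections import Counter

def canonical_form(multiset):
    """Map a multiset to the sorted multiset of counts of its unique elements.
    Any injective relabeling of the elements leaves this form unchanged."""
    return sorted(Counter(multiset).values())

# {1, 2, 3, 2} and {a, b, c, b} are lexically equivalent: both reduce to [1, 1, 2].
print(canonical_form([1, 2, 3, 2]))          # [1, 1, 2]
print(canonical_form(['a', 'b', 'c', 'b']))  # [1, 1, 2]
print(canonical_form([1, 2, 2, 2]))          # [1, 3] -- not equivalent
```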
1 code implementation • 12 Jul 2024 • Lecheng Kong, Jiarui Feng, Hao Liu, Chengsong Huang, Jiaxin Huang, Yixin Chen, Muhan Zhang
For example, current attempts at designing general graph models either transform graph data into a language format for LLM-based prediction or still train a GNN model with LLM as an assistant.
no code implementations • 3 Jul 2024 • Yu Huang, Min Zhou, Menglin Yang, Zhen Wang, Muhan Zhang, Jie Wang, Hong Xie, Hao Wang, Defu Lian, Enhong Chen
Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures.
1 code implementation • 20 Jun 2024 • Jiarui Feng, Hao Liu, Lecheng Kong, Mingfang Zhu, Yixin Chen, Muhan Zhang
In TAGLAS, we collect and integrate more than 23 TAG datasets with domains ranging from citation graphs to molecule graphs and tasks from node classification to graph question-answering.
1 code implementation • 12 Jun 2024 • Xiaohui Zhang, Yanbo Wang, Xiyuan Wang, Muhan Zhang
However, such methods focus on learning individual node representations, but overlook the pairwise representation learning nature of link prediction and fail to capture the important pairwise features of links such as common neighbors (CN).
1 code implementation • 5 May 2024 • Xiyuan Wang, Pan Li, Muhan Zhang
In contrast, this paper introduces a novel graph-to-set conversion method that bijectively transforms interconnected nodes into a set of independent points and then uses a set encoder to learn the graph representation.
1 code implementation • 28 Apr 2024 • Minjie Wang, Quan Gan, David Wipf, Zhenkun Cai, Ning Li, Jianheng Tang, Yanlin Zhang, Zizhao Zhang, Zunyao Mao, Yakun Song, Yanbo Wang, Jiahang Li, Han Zhang, Guang Yang, Xiao Qin, Chuan Lei, Muhan Zhang, Weinan Zhang, Christos Faloutsos, Zheng Zhang
Although RDBs store vast amounts of rich, informative data spread across interconnected tables, the progress of predictive machine learning models as applied to such tasks arguably falls well behind advances in other domains such as computer vision or natural language processing.
no code implementations • 21 Apr 2024 • Zehao Dong, Muhan Zhang, Yixin Chen
We propose a novel Subgraph Pattern GNN (SPGNN) architecture that incorporates these enhancements.
1 code implementation • 3 Apr 2024 • Fanxu Meng, Zhaohui Wang, Muhan Zhang
PiSSA shares the same architecture as LoRA, but initializes the adaptor matrices $A$ and $B$ with the principal components of the original matrix $W$, and puts the remaining components into a residual matrix $W^{res} \in \mathbb{R}^{m \times n}$ which is frozen during fine-tuning.
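The described initialization can be sketched with NumPy (a toy-sized illustration; the shapes and rank here are assumptions, and the real method operates on LLM weight matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 16, 12, 4                      # toy shapes and adaptor rank
W = rng.standard_normal((m, n))

# Principal components via SVD: the top-r part initializes the trainable
# adaptor A, B; the tail becomes the frozen residual W_res.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * np.sqrt(S[:r])            # (m, r)
B = np.sqrt(S[:r])[:, None] * Vt[:r]     # (r, n)
W_res = W - A @ B                        # frozen during fine-tuning

# Exact decomposition: W = A @ B + W_res, and the residual carries
# only the tail singular values.
assert np.allclose(A @ B + W_res, W)
assert np.isclose(np.linalg.norm(W_res), np.linalg.norm(S[r:]))
```

The second assertion checks that the Frobenius norm of the residual equals the norm of the discarded singular values, i.e., the adaptor really absorbed the principal components.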
1 code implementation • 27 Feb 2024 • Yi Hu, Xiaojuan Tang, Haotong Yang, Muhan Zhang
Through carefully designed intervention experiments on five math tasks, we confirm that transformers are performing case-based reasoning, no matter whether scratchpad is used, which aligns with the previous observations that transformers use subgraph matching/shortcut learning to reason.
no code implementations • 11 Feb 2024 • Zehao Dong, Qihang Zhao, Philip R. O. Payne, Michael A Province, Carlos Cruchaga, Muhan Zhang, Tianyu Zhao, Yixin Chen, Fuhai Li
However, we found two major limitations of existing GNNs in omics data analysis, i.e., limited prediction (diagnosis) accuracy and limited reproducibility of biomarker identification across multiple datasets.
1 code implementation • 7 Feb 2024 • Zian Li, Xiyuan Wang, Shijia Kang, Muhan Zhang
We then show that GeoNGNN, the geometric counterpart of one of the simplest subgraph graph neural networks (subgraph GNNs), can effectively break these corner cases' symmetry and thus achieve E(3)-completeness.
1 code implementation • 4 Feb 2024 • Cai Zhou, Xiyuan Wang, Muhan Zhang
Leveraging LGD and the "all tasks as generation" formulation, our framework is capable of solving graph tasks of various levels and types.
1 code implementation • 28 Nov 2023 • Xiyuan Wang, Muhan Zhang
We introduce PyTorch Geometric High Order (PyGHO), a library for High Order Graph Neural Networks (HOGNNs) that extends PyTorch Geometric (PyG).
1 code implementation • 9 Nov 2023 • Fanxu Meng, Haotong Yang, Yiding Wang, Muhan Zhang
The human brain is naturally equipped to comprehend and interpret visual information rapidly.
1 code implementation • 8 Nov 2023 • Jiaqi Li, Mengmeng Wang, Zilong Zheng, Muhan Zhang
In this paper, we present LooGLE, a Long Context Generic Language Evaluation benchmark for LLMs' long context understanding.
1 code implementation • NeurIPS 2023 • Cai Zhou, Xiyuan Wang, Muhan Zhang
Second, on $1$-simplices or the edge level, we bridge edge-level random walks and Hodge $1$-Laplacians and design the corresponding edge PEs.
1 code implementation • 17 Oct 2023 • Muhan Zhang
In the realm of deep learning, the self-attention mechanism has substantiated its pivotal role across a myriad of tasks, encompassing natural language processing and computer vision.
no code implementations • 9 Oct 2023 • Haotong Yang, Fanxu Meng, Zhouchen Lin, Muhan Zhang
Furthermore, by generalizing this structure to the hierarchical case, we demonstrate that models can achieve task composition, further reducing the space needed to learn from linear to logarithmic, thereby effectively learning on complex reasoning involving multiple steps.
2 code implementations • 4 Oct 2023 • Yinan Huang, William Lu, Joshua Robinson, Yu Yang, Muhan Zhang, Stefanie Jegelka, Pan Li
Despite many attempts to address non-uniqueness, most methods overlook stability, leading to poor generalization on unseen graph structures.
Molecular Property Prediction • Out-of-Distribution Generalization
1 code implementation • 29 Sep 2023 • Hao Liu, Jiarui Feng, Lecheng Kong, Ningyue Liang, DaCheng Tao, Yixin Chen, Muhan Zhang
For in-context learning on graphs, OFA introduces a novel graph prompting paradigm that appends prompting substructures to the input graph, which enables it to address varied tasks without fine-tuning.
1 code implementation • 19 Sep 2023 • Hao Liu, Jiarui Feng, Lecheng Kong, DaCheng Tao, Yixin Chen, Muhan Zhang
In our study, we first identify two crucial advantages of contrastive learning compared to meta learning, including (1) the comprehensive utilization of graph nodes and (2) the power of graph augmentations.
1 code implementation • NeurIPS 2023 • Junru Zhou, Jiarui Feng, Xiyuan Wang, Muhan Zhang
Many of the proposed GNN models with provable cycle counting power are based on subgraph GNNs, i.e., extracting a bag of subgraphs from the input graph, generating representations for each subgraph, and using them to augment the representation of the input graph.
no code implementations • 1 Sep 2023 • Zehao Dong, Muhan Zhang, Philip R. O. Payne, Michael A Province, Carlos Cruchaga, Tianyu Zhao, Fuhai Li, Yixin Chen
We theoretically reveal the trade-off of expressivity and stability in graph-canonization-enhanced GNNs.
1 code implementation • 31 Aug 2023 • Zehao Dong, Weidong Cao, Muhan Zhang, DaCheng Tao, Yixin Chen, Xuan Zhang
The electronic design automation of analog circuits has been a longstanding challenge in the integrated circuit field due to the huge design space and complex design trade-offs among circuit specifications.
1 code implementation • 4 Aug 2023 • Ling Yang, Ye Tian, Minkai Xu, Zhongyi Liu, Shenda Hong, Wei Qu, Wentao Zhang, Bin Cui, Muhan Zhang, Jure Leskovec
To address this issue, we propose to learn a new powerful graph representation space by directly labeling nodes' diverse local structures for GNN-to-MLP distillation.
1 code implementation • NeurIPS 2023 • Jiarui Feng, Lecheng Kong, Hao Liu, DaCheng Tao, Fuhai Li, Muhan Zhang, Yixin Chen
We theoretically prove that even if we fix the space complexity to $O(n^k)$ (for any $k\geq 2$) in $(k, t)$-FWL, we can construct an expressiveness hierarchy up to solving the graph isomorphism problem.
Ranked #3 on Graph Regression on ZINC-500k
no code implementations • 29 May 2023 • Yi Hu, Haotong Yang, Zhouchen Lin, Muhan Zhang
We also consider the ensemble of code prompting and CoT prompting to combine the strengths of both.
1 code implementation • 24 May 2023 • Xiaojuan Tang, Zilong Zheng, Jiaqi Li, Fanxu Meng, Song-Chun Zhu, Yitao Liang, Muhan Zhang
On the whole, our analysis provides a novel perspective on the role of semantics in developing and evaluating language models' reasoning abilities.
1 code implementation • 8 May 2023 • Cai Zhou, Xiyuan Wang, Muhan Zhang
Relational pooling is a framework for building more expressive and permutation-invariant graph neural networks.
no code implementations • 20 Apr 2023 • Xiyuan Wang, Pan Li, Muhan Zhang
When we want to learn a node-set representation involving multiple nodes, a common practice in previous works is to directly aggregate the single-node representations obtained by a GNN.
2 code implementations • 16 Apr 2023 • Yanbo Wang, Muhan Zhang
To address these limitations, we study the realized expressive power that a practical model instance can achieve using a novel expressiveness dataset, BREC, which offers greater difficulty (with up to 4-WL-indistinguishable graphs), finer granularity (enabling comparison of models between 1-WL and 3-WL), and a larger scale (800 1-WL-indistinguishable graphs that are pairwise non-isomorphic).
1 code implementation • 19 Mar 2023 • Zuoyu Yan, Junru Zhou, Liangcai Gao, Zhi Tang, Muhan Zhang
We investigate the enhancement of graph neural networks' (GNNs) representation power through their ability in substructure counting.
1 code implementation • 6 Mar 2023 • Haoteng Yin, Muhan Zhang, Jianguo Wang, Pan Li
Subgraph-based graph representation learning (SGRL) has recently emerged as a powerful tool in many prediction tasks on graphs due to its advantages in model expressiveness and generalization ability.
Ranked #1 on Link Property Prediction on ogbl-ppa
no code implementations • 5 Mar 2023 • Hao Liu, Muhan Zhang, Zehao Dong, Lecheng Kong, Yixin Chen, Bradley Fritz, DaCheng Tao, Christopher King
We view time-associated disease prediction as classification tasks at multiple time points.
2 code implementations • NeurIPS 2023 • Zian Li, Xiyuan Wang, Yinan Huang, Muhan Zhang
In this work, we first construct families of novel and symmetric geometric graphs that Vanilla DisGNN cannot distinguish even when considering all-pair distances, which greatly expands the existing counterexample families.
1 code implementation • 2 Feb 2023 • Xiyuan Wang, Haotong Yang, Muhan Zhang
In this work, we propose a novel link prediction model and further boost it by studying graph incompleteness.
Ranked #1 on Link Property Prediction on ogbl-ddi
1 code implementation • 24 Oct 2022 • Xiaojuan Tang, Song-Chun Zhu, Yitao Liang, Muhan Zhang
In this paper, we propose a novel and principled framework called RulE (Rule Embedding) to effectively leverage logical rules to enhance KG reasoning.
2 code implementations • 22 Oct 2022 • Yinan Huang, Xingang Peng, Jianzhu Ma, Muhan Zhang
To the best of our knowledge, it is the first linear-time GNN model that can count 6-cycles with theoretical guarantees.
no code implementations • 7 Oct 2022 • Hao Wang, WanYu Lin, Hao He, Di Wang, Chengzhi Mao, Muhan Zhang
Recent years have seen advances on principles and guidance relating to accountable and ethical use of artificial intelligence (AI) spring up around the globe.
1 code implementation • 6 Oct 2022 • Lecheng Kong, Yixin Chen, Muhan Zhang
The GNN embeddings of nodes on the shortest paths are used to generate geodesic representations.
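A minimal sketch of the geodesic idea, assuming toy scalar "embeddings" in place of learned GNN vectors and mean pooling as the aggregator (both are assumptions for illustration):

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path between src and dst in an unweighted graph."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                frontier.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

# Hypothetical scalar node embeddings; the real model pools learned vectors.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
emb = {0: 1.0, 1: 2.0, 2: 4.0, 3: 8.0}
path = shortest_path(adj, 0, 3)                        # [0, 1, 2, 3]
geodesic_repr = sum(emb[v] for v in path) / len(path)  # mean-pool along the geodesic
print(path, geodesic_repr)
```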
1 code implementation • 19 Sep 2022 • Haotong Yang, Zhouchen Lin, Muhan Zhang
However, evaluation of knowledge graph completion (KGC) models often ignores the incompleteness -- facts in the test set are ranked against all unknown triplets which may contain a large number of missing facts not included in the KG yet.
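The ranking protocol the snippet critiques can be made concrete with the standard "filtered" ranking used in KGC evaluation (scores and entity names below are hypothetical):

```python
def filtered_rank(scores, true_tail, known_tails):
    """Rank the true tail among candidates, filtering out other known-true
    tails -- but, as the paper notes, unknown missing facts stay in the pool."""
    candidates = {t: s for t, s in scores.items()
                  if t == true_tail or t not in known_tails}
    better = sum(1 for s in candidates.values() if s > candidates[true_tail])
    return better + 1

# Hypothetical model scores for tails of a query (h, r, ?).
scores = {'A': 0.9, 'B': 0.8, 'C': 0.7, 'D': 0.1}
# Unfiltered, 'C' ranks 3rd; filtering the known-true tail 'A' lifts it to 2nd.
# If 'B' were a missing-but-true fact absent from the KG, it would still
# (unfairly) outrank 'C' -- the incompleteness issue described above.
print(filtered_rank(scores, 'C', known_tails={'A', 'C'}))  # 2
```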
1 code implementation • 1 Aug 2022 • Xiyuan Wang, Muhan Zhang
Projected onto a frame, equivariant features like 3D coordinates are converted to invariant features, so that we can capture geometric information with these projections and decouple the symmetry requirement from GNN design.
1 code implementation • 20 Jun 2022 • Yang Hu, Xiyuan Wang, Zhouchen Lin, Pan Li, Muhan Zhang
As pointed out by previous works, this two-step procedure results in low discriminating power, as 1-WL-GNNs by nature learn node-level representations instead of link-level.
1 code implementation • 26 May 2022 • Jiarui Feng, Yixin Chen, Fuhai Li, Anindya Sarkar, Muhan Zhang
Recently, researchers extended 1-hop message passing to K-hop message passing by aggregating information from K-hop neighbors of nodes simultaneously.
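A minimal sketch of K-hop aggregation, assuming sum aggregation over BFS-discovered neighborhoods (the paper's actual message-passing kernels and aggregators differ):

```python
from collections import deque

def k_hop_neighbors(adj, node, k):
    """All nodes within k hops of `node` (excluding the node itself), via BFS."""
    dist, frontier = {node: 0}, deque([node])
    while frontier:
        u = frontier.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return {v for v in dist if v != node}

def k_hop_aggregate(adj, feat, node, k):
    """One K-hop message-passing step: sum features over the k-hop neighborhood."""
    return sum(feat[v] for v in k_hop_neighbors(adj, node, k))

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph 0-1-2-3
feat = {0: 1, 1: 10, 2: 100, 3: 1000}
print(k_hop_aggregate(adj, feat, 0, 1))  # 10   (only node 1)
print(k_hop_aggregate(adj, feat, 0, 2))  # 110  (nodes 1 and 2)
```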
2 code implementations • 23 May 2022 • Xiyuan Wang, Muhan Zhang
We also establish a connection between the expressive power of spectral GNNs and Graph Isomorphism (GI) testing, the latter of which is often used to characterize spatial GNNs' expressive power.
1 code implementation • 15 May 2022 • Yinan Huang, Xingang Peng, Jianzhu Ma, Muhan Zhang
The main computational challenges include: 1) the generation of linkers is conditional on the two given molecules, in contrast to generating full molecules from scratch as in previous works; 2) linkers heavily depend on the anchor atoms of the two molecules to be connected, which are not known beforehand; and 3) the 3D structures and orientations of the molecules need to be considered to avoid atom clashes, for which equivariance to the E(3) group is necessary.
1 code implementation • 19 Mar 2022 • Zehao Dong, Muhan Zhang, Fuhai Li, Yixin Chen
In this work, we propose a Parallelizable Attention-based Computation structure Encoder (PACE) that processes nodes simultaneously and encodes DAGs in parallel.
1 code implementation • ICLR 2022 • Haorui Wang, Haoteng Yin, Muhan Zhang, Pan Li
Graph neural networks (GNNs) have shown great advantages in many graph-based learning tasks but often fail to predict accurately on tasks based on sets of nodes, such as link and motif prediction.
3 code implementations • 28 Feb 2022 • Haoteng Yin, Muhan Zhang, Yanbang Wang, Jianguo Wang, Pan Li
Subgraph-based graph representation learning (SGRL) has been recently proposed to deal with some fundamental challenges encountered by canonical graph neural networks (GNNs), and has demonstrated advantages in many important data science applications such as link, relation and motif prediction.
Ranked #1 on Link Property Prediction on ogbl-citation2
1 code implementation • NeurIPS 2021 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
We propose a design principle to decouple the depth and scope of GNNs -- to generate the representation of a target entity (i.e., a node or an edge), we first extract a localized subgraph as the bounded-size scope, and then apply a GNN of arbitrary depth on top of the subgraph.
Ranked #3 on Node Classification on Reddit
no code implementations • 23 Nov 2021 • Xiang Song, Runjie Ma, Jiahang Li, Muhan Zhang, David Paul Wipf
However, wider hidden layers can easily lead to overfitting, and incrementally adding more GNN layers can potentially result in over-smoothing. In this paper, we present a model-agnostic methodology, Network In Graph Neural Network (NGNN), that allows arbitrary GNN models to increase their capacity by making the model deeper.
Ranked #1 on Link Property Prediction on ogbl-citation2
2 code implementations • NeurIPS 2021 • Muhan Zhang, Pan Li
The key is to make each node representation encode a subgraph around it more than a subtree.
Ranked #10 on Graph Property Prediction on ogbg-molpcba
no code implementations • ICLR 2022 • Xiyuan Wang, Muhan Zhang
Moreover, training a GLASS model takes, on average, only 28% of the time needed by SubGNN.
no code implementations • 8 Jun 2021 • Changlin Wan, Muhan Zhang, Wei Hao, Sha Cao, Pan Li, Chi Zhang
SNALS captures the joint interactions of a hyperedge by its local environment, which is retrieved by collecting the spectrum information of their connections.
no code implementations • 1 Jan 2021 • Xinshi Chen, Yan Zhu, Haowen Xu, Muhan Zhang, Liang Xiong, Le Song
We propose a surprisingly simple but effective two-time-scale (2TS) model for learning user representations for recommendation.
2 code implementations • 2 Dec 2020 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
We propose a simple "deep GNN, shallow sampler" design principle to improve both the GNN accuracy and efficiency -- to generate representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph.
2 code implementations • NeurIPS 2021 • Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, Long Jin
In this paper, we provide a theory of using graph neural networks (GNNs) for multi-node representation learning (where we are interested in learning a representation for a set of more than one node, such as link).
Ranked #1 on Link Property Prediction on ogbl-citation2
no code implementations • 28 Sep 2020 • Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, Long Jin
Graph neural networks (GNNs) have achieved great success in recent years.
no code implementations • 29 Jul 2020 • Xiaoxiao Li, Yuan Zhou, Nicha C. Dvornek, Muhan Zhang, Juntang Zhuang, Pamela Ventola, James S. Duncan
We propose an interpretable GNN framework with a novel salient region selection mechanism to determine neurological brain biomarkers associated with disorders.
3 code implementations • ICLR 2020 • Muhan Zhang, Yixin Chen
Under the extreme setting where not any side information is available other than the matrix to complete, can we still learn an inductive matrix completion model?
Ranked #1 on Recommendation Systems on Flixster Monti
2 code implementations • NeurIPS 2019 • Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, Yixin Chen
Graph structured data are abundant in the real world.
2 code implementations • AAAI 2018 • Muhan Zhang, Zhicheng Cui, Marion Neumann, Yixin Chen
Neural networks are typically designed to deal with data in tensor forms.
Ranked #18 on Graph Classification on D&D
10 code implementations • NeurIPS 2018 • Muhan Zhang, Yixin Chen
The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs.
Ranked #1 on Link Prediction on USAir
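As an illustration of heuristics being computable from local subgraphs, the common-neighbor heuristic for a link needs only the 1-hop enclosing subgraph around the two endpoints (toy graph; this is not the SEAL architecture itself):

```python
def common_neighbors(adj, u, v):
    """Common-neighbor heuristic score for link (u, v)."""
    return len(set(adj[u]) & set(adj[v]))

def enclosing_subgraph(adj, u, v, hops=1):
    """Induced subgraph on all nodes within `hops` of u or v."""
    keep = {u, v}
    frontier = {u, v}
    for _ in range(hops):
        frontier = {w for x in frontier for w in adj[x]} - keep
        keep |= frontier
    return {x: [y for y in adj[x] if y in keep] for x in keep}

# Toy graph; node 5 lies outside the 1-hop enclosing subgraph of (0, 1).
adj = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5], 3: [0], 4: [1, 5], 5: [2, 4]}
sub = enclosing_subgraph(adj, 0, 1, hops=1)
# The heuristic computed on the local subgraph matches the full graph.
assert common_neighbors(sub, 0, 1) == common_neighbors(adj, 0, 1) == 1
```

Higher-order heuristics need larger hop counts, which is why the framework relates each heuristic to the subgraph radius required to approximate it.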
1 code implementation • KDD 2017 • Muhan Zhang, Yixin Chen
Compared with traditional link prediction methods, WLNM does not assume a particular link formation mechanism (such as common neighbors), but learns this mechanism from the graph itself.