Network Embedding

153 papers with code • 0 benchmarks • 4 datasets

Network Embedding, also known as "Network Representation Learning", is a collective term for techniques that map graph nodes to vectors of real numbers in a multidimensional space. To be useful, a good embedding should preserve the structure of the graph. The vectors can then be used as input to various network and graph analysis tasks, such as link prediction.

Source: Tutorial on NLP-Inspired Network Embedding
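Below is a minimal, illustrative DeepWalk-style sketch of the idea (hypothetical code, not tied to any particular paper on this page; assumes networkx and gensim >= 4.0 are installed): truncated random walks over the graph are treated as sentences and fed to word2vec, so that nodes sharing structural context end up with nearby vectors, which can then be scored for tasks such as link prediction.

```python
# Minimal DeepWalk-style sketch of network embedding (illustrative only).
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, num_walks=10, walk_length=20, seed=42):
    """Generate truncated random walks, one list of node ids per walk."""
    rng = random.Random(seed)
    walks = []
    nodes = list(G.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(G.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append([str(n) for n in walk])  # word2vec expects string tokens
    return walks

G = nx.karate_club_graph()
walks = random_walks(G)
model = Word2Vec(walks, vector_size=64, window=5, min_count=0, sg=1, epochs=5)

# The learned vectors can feed downstream tasks such as link prediction,
# e.g. by ranking candidate edges with cosine similarity of their endpoints.
print(model.wv.most_similar("0", topn=5))
```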


Latest papers with no code

An Isolation-Aware Online Virtual Network Embedding via Deep Reinforcement Learning

no code yet • 25 Nov 2022

We define a simple abstracted concept of isolation levels to capture the variations in isolation requirements and then formulate isolation-aware VNE as an optimization problem with resource and isolation constraints.
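As a rough illustration of what "resource and isolation constraints" can mean in virtual network embedding, here is a hypothetical toy sketch (not the paper's formulation or its reinforcement-learning policy): each virtual node is greedily placed on a substrate node that has enough spare CPU and offers at least the requested isolation level.

```python
# Toy sketch of isolation-aware virtual network embedding (illustrative only).
substrate = {                      # substrate node -> capacity and offered isolation level
    "s1": {"cpu": 10, "iso": 3},
    "s2": {"cpu": 6,  "iso": 1},
    "s3": {"cpu": 8,  "iso": 2},
}
requests = [                       # virtual node -> CPU demand and required isolation level
    {"name": "v1", "cpu": 4, "iso": 2},
    {"name": "v2", "cpu": 3, "iso": 1},
    {"name": "v3", "cpu": 5, "iso": 3},
]

used = {s: 0 for s in substrate}
placement = {}
for v in requests:
    candidates = [
        s for s, attrs in substrate.items()
        if attrs["cpu"] - used[s] >= v["cpu"] and attrs["iso"] >= v["iso"]
    ]
    if not candidates:
        placement[v["name"]] = None          # request rejected
        continue
    best = min(candidates, key=lambda s: substrate[s]["cpu"] - used[s])  # tightest fit
    used[best] += v["cpu"]
    placement[v["name"]] = best

print(placement)
```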

Embedding Representation of Academic Heterogeneous Information Networks Based on Federated Learning

no code yet • 7 Oct 2022

To address these challenges, and focusing on the data of scientific research teams closely tied to science and technology, we propose FedAHE, an academic heterogeneous information network embedding method based on federated learning. It uses node attention and meta-path attention mechanisms to learn low-dimensional, dense, real-valued vector representations while preserving the rich topological information and meta-path-based semantic information of nodes in the network.

Attributed Network Embedding Model for Exposing COVID-19 Spread Trajectory Archetypes

no code yet • 20 Sep 2022

To this end, this study creates a network embedding model that captures cross-county visitation networks and heterogeneous features to uncover clusters of counties in the United States based on their pandemic transmission trajectories.

Associative Learning for Network Embedding

no code yet • 30 Aug 2022

The network embedding task is to represent each node in a network as a low-dimensional vector while incorporating topological and structural information.

Signed Network Embedding with Application to Simultaneous Detection of Communities and Anomalies

no code yet • 8 Jul 2022

This paper develops a unified embedding model for signed networks to disentangle the intertwined balance structure and anomaly effect, which can greatly facilitate downstream analyses, including community detection, anomaly detection, and network inference.

Large-Scale Privacy-Preserving Network Embedding against Private Link Inference Attacks

no code yet • 28 May 2022

We propose to perturb the original network by adding or removing links, so that the embeddings generated on the perturbed network leak little information about private links while retaining high utility for various downstream tasks.
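For intuition, here is a hypothetical sketch of link perturbation before embedding (illustration only, not the paper's method; assumes networkx is installed): a fraction of existing edges is removed and the same number of previously absent edges is added, and the embedding is then computed on the perturbed graph.

```python
# Sketch of link perturbation for privacy-aware embedding (illustrative only).
import random
import networkx as nx

def perturb_links(G, fraction=0.1, seed=0):
    rng = random.Random(seed)
    H = G.copy()
    edges = list(H.edges())
    k = int(fraction * len(edges))
    # remove k randomly chosen existing links
    for u, v in rng.sample(edges, k):
        H.remove_edge(u, v)
    # add k randomly chosen links that were absent from the original graph
    non_edges = list(nx.non_edges(G))
    for u, v in rng.sample(non_edges, k):
        H.add_edge(u, v)
    return H

G = nx.karate_club_graph()
G_private = perturb_links(G, fraction=0.1)
# G_private would then be fed to any embedding method (e.g. the DeepWalk
# sketch above) in place of the original graph.
```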

NECA: Network-Embedded Deep Representation Learning for Categorical Data

no code yet • 25 May 2022

We propose NECA, a deep representation learning method for categorical data.

Deep Partial Multiplex Network Embedding

no code yet • 5 Mar 2022

Network embedding is an effective technique to learn the low-dimensional representations of nodes in networks.

Pay Attention to Relations: Multi-embeddings for Attributed Multiplex Networks

no code yet • 3 Mar 2022

In contrast to prior work, RAHMeN is a more expressive embedding framework that embraces the multi-faceted nature of nodes in such networks, producing a set of multi-embeddings that capture the varied and diverse contexts of nodes.

Grammar-Based Grounded Lexicon Learning

no code yet • NeurIPS 2021

We present Grammar-Based Grounded Lexicon Learning (G2L2), a lexicalist approach toward learning a compositional and grounded meaning representation of language from grounded data, such as paired images and texts.