1 code implementation • 14 Nov 2023 • Ed Davis, Ian Gallagher, Daniel John Lawson, Patrick Rubin-Delanchy
We propose that a wide class of established static network embedding methods can be used to produce interpretable and powerful dynamic network embeddings when they are applied to the dilated unfolded adjacency matrix.
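The construction named above can be sketched in a few lines: stack the adjacency snapshots side by side, symmetrise the resulting rectangular matrix by dilation, and apply an ordinary spectral embedding. This is a minimal illustrative sketch of that idea, not the paper's implementation; the function name and the choice of eigendecomposition details are assumptions.

```python
import numpy as np

def dilated_unfolded_embedding(adjacency_series, d):
    """Sketch: dynamic embedding via the dilated unfolded adjacency matrix.

    adjacency_series: list of T symmetric (n x n) adjacency matrices.
    d: embedding dimension.
    Returns an array of shape (T, n, d), one node embedding per time point.
    """
    n = adjacency_series[0].shape[0]
    T = len(adjacency_series)
    # Unfolded matrix: snapshots stacked column-wise, shape n x (n*T).
    unfolded = np.hstack(adjacency_series)
    # Dilation: place the rectangular unfolded matrix inside a larger
    # symmetric block matrix, so any symmetric spectral method applies.
    dilated = np.block([
        [np.zeros((n, n)), unfolded],
        [unfolded.T, np.zeros((n * T, n * T))],
    ])
    vals, vecs = np.linalg.eigh(dilated)
    # Keep the d eigenvectors with largest-magnitude eigenvalues,
    # scaled by the square roots of those magnitudes.
    idx = np.argsort(-np.abs(vals))[:d]
    right = vecs[n:, idx] * np.sqrt(np.abs(vals[idx]))
    # The lower block rows are time-indexed copies of the nodes.
    return right.reshape(T, n, d)
```

The time-varying embeddings come from the rows of the eigenvectors corresponding to the unfolded (column) side of the dilation, grouped by snapshot.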
no code implementations • NeurIPS 2023 • Alexander Modell, Ian Gallagher, Emma Ceccherini, Nick Whiteley, Patrick Rubin-Delanchy
We present a new representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
1 code implementation • NeurIPS 2023 • Annie Gray, Alexander Modell, Patrick Rubin-Delanchy, Nick Whiteley
In this paper we offer a new perspective on the well-established agglomerative clustering algorithm, focusing on the recovery of hierarchical structure.
no code implementations • 27 Oct 2022 • Hannah Sansford, Alexander Modell, Nick Whiteley, Patrick Rubin-Delanchy
Recent work has shown that sparse graphs containing many triangles cannot be reproduced using a finite-dimensional representation of the nodes, in which link probabilities are inner products.

2 code implementations • 24 Aug 2022 • Nick Whiteley, Annie Gray, Patrick Rubin-Delanchy
The Manifold Hypothesis is a widely accepted tenet of Machine Learning which asserts that nominally high-dimensional data are in fact concentrated near a low-dimensional manifold, embedded in high-dimensional space.
1 code implementation • 8 Feb 2022 • Alexander Modell, Ian Gallagher, Joshua Cape, Patrick Rubin-Delanchy
Spectral embedding finds vector representations of the nodes of a network, based on the eigenvectors of its adjacency or Laplacian matrix, and has found applications throughout the sciences.
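The basic procedure is short enough to state in code. The following is a generic sketch of adjacency spectral embedding, assuming a symmetric adjacency matrix; the scaling by square-rooted eigenvalue magnitudes is one common convention, not necessarily the one used in this paper.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Represent each node of a graph by d coordinates built from the
    leading eigenvectors of its (symmetric) adjacency matrix, scaled by
    the square roots of the corresponding eigenvalue magnitudes."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[:d]   # top d by |eigenvalue|
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))
```

Swapping `A` for a Laplacian matrix gives the Laplacian variant mentioned in the abstract.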
1 code implementation • NeurIPS 2021 • Ian Gallagher, Andrew Jones, Patrick Rubin-Delanchy
We consider the problem of embedding a dynamic network, to obtain time-evolving vector representations of each node, which can then be used to describe changes in behaviour of individual nodes, communities, or the entire graph.
1 code implementation • NeurIPS 2021 • Nick Whiteley, Annie Gray, Patrick Rubin-Delanchy
Given a graph or similarity matrix, we consider the problem of recovering a notion of true distance between the nodes, and so their true positions.
no code implementations • 3 May 2021 • Alexander Modell, Patrick Rubin-Delanchy
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
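A sketch of the construction in question, assuming a connected graph with no isolated nodes: the random walk Laplacian's eigenvectors can be computed stably through the similar symmetric normalised matrix. The numerical route below is an illustrative choice, not the authors' code.

```python
import numpy as np

def random_walk_embedding(A, d):
    """Embed nodes using right-eigenvectors of the random walk
    transition matrix P = D^{-1} A (a sketch of the degree-corrected
    spectral embedding idea)."""
    deg = A.sum(axis=1)
    # P is similar to the symmetric S = D^{-1/2} A D^{-1/2}, so it has
    # real eigenvalues; compute them via S for numerical stability.
    dhalf = np.sqrt(deg)
    S = A / dhalf[:, None] / dhalf[None, :]
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(-np.abs(vals))[:d]
    # Map eigenvectors of S back to right-eigenvectors of P:
    # if S u = lam u, then P (D^{-1/2} u) = lam (D^{-1/2} u).
    return vecs[:, idx] / dhalf[:, None]
```

Because the transition matrix already divides out node degrees, the resulting coordinates do not need a separate degree-correction step.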
1 code implementation • 9 Nov 2020 • Francesco Sanna Passino, Nicholas A. Heard, Patrick Rubin-Delanchy
The proposed method is based on a transformation of the spectral embedding to spherical coordinates, and a novel modelling assumption in the transformed space.
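As a rough illustration of the first step, here is one standard Cartesian-to-spherical transformation applied row-wise to an embedding: each node gets a radius and d-1 angles. This is a generic parametrisation for illustration only, not necessarily the one the authors adopt.

```python
import numpy as np

def to_spherical(X):
    """Map each row of an embedding X (n x d) from Cartesian to
    spherical coordinates: a radius plus d-1 angles per node."""
    r = np.linalg.norm(X, axis=1)
    angles = []
    for j in range(X.shape[1] - 1):
        # Angle between coordinate j and the remaining tail of the row.
        tail = np.linalg.norm(X[:, j:], axis=1)
        safe = np.where(tail == 0, 1.0, tail)   # avoid division by zero
        angles.append(np.arccos(np.clip(X[:, j] / safe, -1.0, 1.0)))
    return r, np.column_stack(angles)
```

Modelling the angular coordinates separately from the radius is what allows the kind of "novel modelling assumption in the transformed space" the abstract alludes to.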
1 code implementation • 20 Jul 2020 • Andrew Jones, Patrick Rubin-Delanchy
We present a comprehensive extension of the latent position network model known as the random dot product graph to accommodate multiple graphs -- both undirected and directed -- which share a common subset of nodes, and propose a method for jointly embedding the associated adjacency matrices, or submatrices thereof, into a suitable latent space.
no code implementations • NeurIPS 2020 • Patrick Rubin-Delanchy
Statistical analysis of a graph often starts with embedding, the process of representing its nodes as points in space.
no code implementations • 12 Oct 2019 • Ian Gallagher, Andrew Jones, Anna Bertiger, Carey Priebe, Patrick Rubin-Delanchy
When analyzing weighted networks using spectral embedding, a judicious transformation of the edge weights may produce better results.
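In pipeline terms, the idea amounts to inserting an entrywise weight transformation before the embedding step. The sketch below uses log(1 + w) purely as an example of a variance-damping transform; the abstract does not commit to any particular choice.

```python
import numpy as np

def embed_transformed_weights(W, d, transform=np.log1p):
    """Spectral embedding of a weighted network after an entrywise
    transformation of the edge weights (here log(1 + w) by default,
    chosen only for illustration)."""
    A = transform(W)                      # entrywise weight transform
    vals, vecs = np.linalg.eigh(A)        # W assumed symmetric
    idx = np.argsort(-np.abs(vals))[:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))
```

Heavy-tailed weights can otherwise let a few strong edges dominate the leading eigenvectors, which is one reason a judicious transformation can produce better results.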
no code implementations • 16 Sep 2017 • Patrick Rubin-Delanchy, Joshua Cape, Minh Tang, Carey E. Priebe
Spectral embedding is a procedure which can be used to obtain vector representations of the nodes of a graph.