no code implementations • 29 Apr 2020 • Keith Levin, Carey E. Priebe, Vince Lyzinski
In this paper, we explore, both theoretically and practically, the dual roles of content (i.e., edge and vertex attributes) and context (i.e., network topology) in vertex nomination.
no code implementations • 29 Sep 2019 • Keith Levin, Fred Roosta, Minh Tang, Michael W. Mahoney, Carey E. Priebe
In both cases, we prove that when the underlying graph is generated according to the random dot product graph, a latent space model that includes the popular stochastic block model as a special case, an out-of-sample extension based on a least-squares objective satisfies a central limit theorem centered on the true latent position of the out-of-sample vertex.
no code implementations • 13 Jun 2019 • Keith Levin, Asad Lodhia, Elizaveta Levina
In increasingly many settings, data sets consist of multiple samples from a population of networks, with vertices aligned across these networks.
no code implementations • ICML 2018 • Keith Levin, Farbod Roosta-Khorasani, Michael W. Mahoney, Carey E. Priebe
Many popular dimensionality reduction procedures have out-of-sample extensions, which allow a practitioner to apply a learned embedding to observations not seen in the initial training sample.
no code implementations • 14 Feb 2018 • Jordan Yoder, Li Chen, Henry Pao, Eric Bridgeford, Keith Levin, Donniell Fishkind, Carey Priebe, Vince Lyzinski
There are vertex nomination schemes in the literature, including the optimally precise canonical nomination scheme $\mathcal{L}^C$ and the consistent spectral partitioning nomination scheme $\mathcal{L}^P$.
no code implementations • 15 Nov 2017 • Vince Lyzinski, Keith Levin, Carey E. Priebe
Given a vertex of interest in a network $G_1$, the vertex nomination problem seeks to find the corresponding vertex of interest (if it exists) in a second network $G_2$.
no code implementations • 16 Sep 2017 • Avanti Athreya, Donniell E. Fishkind, Keith Levin, Vince Lyzinski, Youngser Park, Yichen Qin, Daniel L. Sussman, Minh Tang, Joshua T. Vogelstein, Carey E. Priebe
In this survey paper, we describe a comprehensive paradigm for statistical inference on random dot product graphs, a paradigm centered on spectral embeddings of adjacency and Laplacian matrices.
1 code implementation • 12 Jun 2017 • Shane Settle, Keith Levin, Herman Kamper, Karen Livescu
Query-by-example search often uses dynamic time warping (DTW) for comparing queries and proposed matching segments.
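The DTW comparison mentioned above can be sketched as follows: a standard dynamic-programming alignment between a query and a candidate segment, each a sequence of feature frames. This is the textbook algorithm, not the paper's specific system.

```python
import numpy as np

def dtw_distance(q, s):
    """Classic dynamic time warping between two feature sequences.

    q: (n, d) query frames; s: (m, d) candidate-segment frames.
    Returns the minimum cumulative frame-to-frame distance over all
    monotone alignment paths.
    """
    n, m = len(q), len(s)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dist = np.linalg.norm(q[i - 1] - s[j - 1])  # local frame distance
            cost[i, j] = dist + min(cost[i - 1, j],      # skip a query frame
                                    cost[i, j - 1],      # skip a segment frame
                                    cost[i - 1, j - 1])  # match both frames
    return cost[n, m]
```

Because warping absorbs repeated frames at no cost, a query matches a time-stretched copy of itself with distance zero, which is why DTW is a natural comparison for speech segments spoken at different rates.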
no code implementations • 5 Jul 2016 • Vince Lyzinski, Keith Levin, Donniell E. Fishkind, Carey E. Priebe
Given a graph in which a few vertices are deemed interesting a priori, the vertex nomination task is to order the remaining vertices into a nomination list such that there is a concentration of interesting vertices at the top of the list.
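A minimal baseline for the nomination task described above (purely illustrative, and not the canonical or spectral-partitioning schemes studied in the paper): spectrally embed the graph, then rank the remaining vertices by distance to the centroid of the interesting vertices' embeddings.

```python
import numpy as np

def nominate(A, seeds, d=2):
    """Hypothetical baseline nomination scheme.

    A: symmetric adjacency matrix; seeds: indices of a-priori interesting
    vertices. Returns the remaining vertices ordered so that vertices most
    similar to the seeds (in a d-dimensional spectral embedding) come first.
    """
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[-d:]          # top-d eigenvalues by magnitude
    X = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))  # scaled spectral embedding
    centroid = X[seeds].mean(axis=0)
    rest = [v for v in range(len(A)) if v not in seeds]
    return sorted(rest, key=lambda v: np.linalg.norm(X[v] - centroid))
```

On a graph with two dense blocks and seeds drawn from one block, the rest of that block concentrates at the top of the nomination list, matching the goal stated in the abstract.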
no code implementations • 12 Mar 2016 • Keith Levin, Vince Lyzinski
In particular, we consider Laplacian eigenmaps embeddings based on a kernel matrix, and explore how the embeddings behave when this kernel matrix is corrupted by occlusion and noise.
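The setting above can be sketched numerically: build a Gaussian kernel matrix on toy data, corrupt it with occlusion and additive noise as stand-ins for the perturbations studied in the paper, and compare the resulting Laplacian eigenmaps embeddings. The corruption model and all parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated Gaussian clusters in the plane.
pts = np.vstack([rng.normal(0, 0.1, (30, 2)),
                 rng.normal(2, 0.1, (30, 2))])

# Gaussian kernel (similarity) matrix.
sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)

# Corrupt the kernel: symmetric additive noise plus an occluded block.
noise = rng.normal(0, 0.05, K.shape)
K_noisy = K + (noise + noise.T) / 2
K_noisy[:5, :5] = 0.0  # occluded entries

def laplacian_eigenmap(K, d=1):
    """Embed via eigenvectors of the normalized Laplacian
    L = I - D^{-1/2} K D^{-1/2}, skipping the trivial bottom eigenvector."""
    deg = K.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(K)) - dinv[:, None] * K * dinv[None, :]
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:d + 1]

Y_clean = laplacian_eigenmap(K)
Y_noisy = laplacian_eigenmap(K_noisy)
```

In the clean embedding the second eigenvector (the Fiedler-type direction) separates the two clusters by sign; comparing `Y_clean` and `Y_noisy` gives a concrete sense of how embeddings move under the corruption.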