Search Results for author: Keith Levin

Found 10 papers, 1 paper with code

Vertex Nomination in Richly Attributed Networks

no code implementations • 29 Apr 2020 • Keith Levin, Carey E. Priebe, Vince Lyzinski

In this paper, we explore, both theoretically and practically, the dual roles of content (i.e., edge and vertex attributes) and context (i.e., network topology) in vertex nomination.

Information Retrieval, Retrieval

Limit theorems for out-of-sample extensions of the adjacency and Laplacian spectral embeddings

no code implementations • 29 Sep 2019 • Keith Levin, Fred Roosta, Minh Tang, Michael W. Mahoney, Carey E. Priebe

In both cases, we prove that when the underlying graph is generated according to a latent space model called the random dot product graph, which includes the popular stochastic block model as a special case, an out-of-sample extension based on a least-squares objective obeys a central limit theorem about the true latent position of the out-of-sample vertex.

Dimensionality Reduction, Graph Embedding, +1
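
As a rough illustration of the idea (not the paper's exact estimator), the sketch below computes an adjacency spectral embedding and then places a held-out vertex by least squares against the in-sample embedding. It assumes only numpy, and the function names `ase` and `oos_extend` are hypothetical.

```python
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding: top-d eigenvectors of A,
    scaled by the square roots of the corresponding |eigenvalues|."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def oos_extend(X_hat, a_new):
    """Least-squares out-of-sample extension: solve min_w ||X_hat w - a_new||,
    where a_new holds the new vertex's edges to the in-sample vertices."""
    w, *_ = np.linalg.lstsq(X_hat, a_new, rcond=None)
    return w
```

The least-squares step is the objective named in the abstract: the new vertex's estimated latent position is the vector whose inner products with the in-sample embedding best reproduce its observed edges.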

Recovering shared structure from multiple networks with unknown edge distributions

no code implementations • 13 Jun 2019 • Keith Levin, Asad Lodhia, Elizaveta Levina

In increasingly many settings, data sets consist of multiple samples from a population of networks, with vertices aligned across these networks.
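
A naive baseline for this setting (hedged: not necessarily the estimator proposed in the paper) is to average the vertex-aligned adjacency matrices and truncate to low rank. The sketch assumes numpy and that `adjs` stacks the m networks as an (m, n, n) array.

```python
import numpy as np

def shared_low_rank(adjs, d):
    """Average aligned adjacency matrices, then keep the best
    rank-d approximation as an estimate of the shared structure."""
    A_bar = np.mean(adjs, axis=0)
    vals, vecs = np.linalg.eigh(A_bar)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
```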

Out-of-sample extension of graph adjacency spectral embedding

no code implementations • ICML 2018 • Keith Levin, Farbod Roosta-Khorasani, Michael W. Mahoney, Carey E. Priebe

Many popular dimensionality reduction procedures have out-of-sample extensions, which allow a practitioner to apply a learned embedding to observations not seen in the initial training sample.

Dimensionality Reduction, Position
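
For contrast with the graph-specific least-squares extension sketched earlier on this page, here is a generic Nyström-style out-of-sample extension for a kernel eigen-embedding. This is a standard construction offered only to illustrate the general idea the abstract describes, not this paper's method.

```python
import numpy as np

def kernel_embed(K, d):
    """Top-d eigenpairs of a (PSD) training kernel matrix K."""
    vals, vecs = np.linalg.eigh(K)
    idx = np.argsort(vals)[::-1][:d]
    return vecs[:, idx], vals[idx]

def nystrom_extend(vecs, vals, k_new):
    """Embed a new point from its kernel values k_new against the
    training points: y_j = (v_j . k_new) / sqrt(lambda_j)."""
    return (vecs.T @ k_new) / np.sqrt(vals)
```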

Vertex nomination: The canonical sampling and the extended spectral nomination schemes

no code implementations • 14 Feb 2018 • Jordan Yoder, Li Chen, Henry Pao, Eric Bridgeford, Keith Levin, Donniell Fishkind, Carey Priebe, Vince Lyzinski

Several vertex nomination schemes exist in the literature, including the optimally precise canonical nomination scheme $\mathcal{L}^C$ and the consistent spectral partitioning nomination scheme $\mathcal{L}^P$.

Clustering, Stochastic Block Model

On consistent vertex nomination schemes

no code implementations • 15 Nov 2017 • Vince Lyzinski, Keith Levin, Carey E. Priebe

Given a vertex of interest in a network $G_1$, the vertex nomination problem seeks to find the corresponding vertex of interest (if it exists) in a second network $G_2$.

Information Retrieval, Retrieval

Statistical inference on random dot product graphs: a survey

no code implementations • 16 Sep 2017 • Avanti Athreya, Donniell E. Fishkind, Keith Levin, Vince Lyzinski, Youngser Park, Yichen Qin, Daniel L. Sussman, Minh Tang, Joshua T. Vogelstein, Carey E. Priebe

In this survey paper, we describe a comprehensive paradigm for statistical inference on random dot product graphs, a paradigm centered on spectral embeddings of adjacency and Laplacian matrices.

Community Detection
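
A minimal sketch of the Laplacian half of that paradigm (the adjacency version appears earlier on this page), assuming numpy and taking the Laplacian spectral embedding to use the degree-normalized matrix D^{-1/2} A D^{-1/2}, as in the random dot product graph literature.

```python
import numpy as np

def lse(A, d):
    """Laplacian spectral embedding: top-d scaled eigenvectors of the
    degree-normalized adjacency matrix D^{-1/2} A D^{-1/2}."""
    deg = np.maximum(A.sum(axis=1), 1e-12)   # guard isolated vertices
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = D_inv_sqrt @ A @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))
```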

Query-by-Example Search with Discriminative Neural Acoustic Word Embeddings

1 code implementation • 12 Jun 2017 • Shane Settle, Keith Levin, Herman Kamper, Karen Livescu

Query-by-example search often uses dynamic time warping (DTW) for comparing queries and proposed matching segments.

Dynamic Time Warping, Word Embeddings
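
For reference, a textbook DTW distance between two sequences of frame vectors, the baseline that the paper's embeddings replace with a single vector comparison. This is a quadratic-time numpy sketch, not the paper's implementation.

```python
import numpy as np

def dtw(x, y):
    """Dynamic time warping cost between frame sequences x (n, f)
    and y (m, f), using Euclidean frame distances."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# With acoustic word embeddings, this whole recursion collapses to one
# comparison, e.g. a cosine distance between two fixed-dimensional vectors.
```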

On the Consistency of the Likelihood Maximization Vertex Nomination Scheme: Bridging the Gap Between Maximum Likelihood Estimation and Graph Matching

no code implementations • 5 Jul 2016 • Vince Lyzinski, Keith Levin, Donniell E. Fishkind, Carey E. Priebe

Given a graph in which a few vertices are deemed interesting a priori, the vertex nomination task is to order the remaining vertices into a nomination list such that there is a concentration of interesting vertices at the top of the list.

Graph Matching, Stochastic Block Model
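
A toy spectral-flavored nomination scheme, far simpler than the likelihood-maximization scheme studied in the paper but enough to make the task concrete: rank the non-seed vertices by distance, in some embedding X, to the centroid of the interesting seed vertices. The function name and interface are hypothetical.

```python
import numpy as np

def nominate(X, seeds):
    """Return non-seed vertex indices ordered so that vertices whose
    embeddings lie closest to the seed centroid come first."""
    centroid = X[seeds].mean(axis=0)
    rest = np.setdiff1d(np.arange(len(X)), seeds)
    dists = np.linalg.norm(X[rest] - centroid, axis=1)
    return rest[np.argsort(dists)]   # the nomination list
```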

Laplacian Eigenmaps from Sparse, Noisy Similarity Measurements

no code implementations • 12 Mar 2016 • Keith Levin, Vince Lyzinski

In particular, we consider Laplacian eigenmaps embeddings based on a kernel matrix, and explore how the embeddings behave when this kernel matrix is corrupted by occlusion and noise.

Dimensionality Reduction
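
A compact sketch of the Laplacian eigenmaps construction under study, assuming numpy and taking W to be the (possibly occluded and noisy) kernel matrix. The symmetrization and degree guard are defensive choices for corrupted input, not prescriptions from the paper.

```python
import numpy as np

def laplacian_eigenmaps(W, d):
    """Embed items from a similarity matrix W via the d nontrivial
    eigenvectors of the symmetric normalized Laplacian."""
    W = (W + W.T) / 2.0                       # symmetrize noisy input
    deg = np.maximum(W.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)
    order = np.argsort(vals)                  # ascending eigenvalues
    return D_inv_sqrt @ vecs[:, order[1:d + 1]]   # drop the trivial one
```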
