no code implementations • 14 Jul 2022 • Alessandro Epasto, Vahab Mirrokni, Bryan Perozzi, Anton Tsitsulin, Peilin Zhong
Personalized PageRank (PPR) is a fundamental tool in unsupervised learning of graph representations such as node ranking, labeling, and graph embedding.
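The PPR computation this abstract builds on can be illustrated with simple power iteration. This is a generic textbook sketch, not the paper's (scalable, differentiable) implementation; the function name and toy graph are illustrative only:

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    """Power iteration for Personalized PageRank.

    adj:   dense adjacency matrix (n x n)
    seed:  node the walk restarts from
    alpha: teleport (restart) probability
    """
    n = adj.shape[0]
    # Row-normalize the adjacency to get the random-walk transition matrix.
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1)
    # Restart distribution concentrated on the seed node.
    e = np.zeros(n)
    e[seed] = 1.0
    pi = e.copy()
    for _ in range(iters):
        pi = alpha * e + (1 - alpha) * pi @ P
    return pi

# Toy 4-node path graph: 0 - 1 - 2 - 3, personalized toward node 0.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
ppr = personalized_pagerank(A, seed=0)
```

The resulting vector sums to one and concentrates probability mass near the seed, which is what makes PPR useful for node ranking and as input to embeddings.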
no code implementations • 10 Jul 2022 • Minji Yoon, Yue Wu, John Palowitch, Bryan Perozzi, Ruslan Salakhutdinov
A surge of interest in Graph Convolutional Networks (GCN) has produced thousands of GCN variants, with hundreds introduced every year.
1 code implementation • 7 Jul 2022 • Oleksandr Ferludin, Arno Eigenwillig, Martin Blais, Dustin Zelle, Jan Pfeifer, Alvaro Sanchez-Gonzalez, Sibon Li, Sami Abu-El-Haija, Peter Battaglia, Neslihan Bulut, Jonathan Halcrow, Filipe Miguel Gonçalves de Almeida, Silvio Lattanzi, André Linhares, Brandon Mayer, Vahab Mirrokni, John Palowitch, Mihir Paradkar, Jennifer She, Anton Tsitsulin, Kevin Villela, Lisa Wang, David Wong, Bryan Perozzi
TensorFlow GNN (TF-GNN) is a scalable library for Graph Neural Networks in TensorFlow.
no code implementations • 20 May 2022 • Seyed Mehran Kazemi, Anton Tsitsulin, Hossein Esfandiari, Mohammadhossein Bateni, Deepak Ramachandran, Bryan Perozzi, Vahab Mirrokni
Representative selection (RS) is the problem of finding a small subset of exemplars from an unlabeled dataset, and has numerous applications in summarization, active learning, data compression and many other domains.
1 code implementation • 4 Apr 2022 • Anton Tsitsulin, Benedek Rozemberczki, John Palowitch, Bryan Perozzi
This shockingly small sample size (~10) allows for only limited scientific insight into the problem.
no code implementations • 3 Mar 2022 • Minji Yoon, John Palowitch, Dustin Zelle, Ziniu Hu, Ruslan Salakhutdinov, Bryan Perozzi
We propose a zero-shot transfer learning module for HGNNs called a Knowledge Transfer Network (KTN) that transfers knowledge from label-abundant node types to zero-labeled node types through rich relational information given in the HG.
1 code implementation • 28 Feb 2022 • John Palowitch, Anton Tsitsulin, Brandon Mayer, Bryan Perozzi
Using GraphWorld, a user has fine-grained control over graph generator parameters, and can benchmark arbitrary GNN models with built-in hyperparameter tuning.
1 code implementation • NeurIPS 2021 • Qi Zhu, Natalia Ponomareva, Jiawei Han, Bryan Perozzi
In this work we present a method, Shift-Robust GNN (SR-GNN), designed to account for distributional differences between biased training data and the graph's true inference distribution.
1 code implementation • ICLR 2021 • Elan Markowitz, Keshav Balasubramanian, Mehrnoosh Mirtaheri, Sami Abu-El-Haija, Bryan Perozzi, Greg Ver Steeg, Aram Galstyan
We propose Graph Traversal via Tensor Functionals (GTTF), a unifying meta-algorithm framework for easing the implementation of diverse graph algorithms and enabling transparent and efficient scaling to large graphs.
1 code implementation • 24 Oct 2020 • Benedek Rozemberczki, Peter Englert, Amol Kapoor, Martin Blais, Bryan Perozzi
Additional results from a challenging suite of node classification experiments show how PDNs can learn a wider class of functions than existing baselines.
no code implementations • 14 Oct 2020 • Ştefan Postăvaru, Anton Tsitsulin, Filipe Miguel Gonçalves de Almeida, Yingtao Tian, Silvio Lattanzi, Bryan Perozzi
In this paper, we introduce InstantEmbedding, an efficient method for generating single-node representations using local PageRank computations.
no code implementations • 23 Jul 2020 • Jonathan Halcrow, Alexandru Moşoi, Sam Ruth, Bryan Perozzi
Interestingly, there are often many types of similarity available to choose as the edges between nodes, and the choice of edges can drastically affect the performance of downstream semi-supervised learning systems.
3 code implementations • 6 Jul 2020 • Amol Kapoor, Xue Ben, Luyang Liu, Bryan Perozzi, Matt Barnes, Martin Blais, Shawn O'Banion
In this work, we examine a novel forecasting approach for COVID-19 case prediction that uses Graph Neural Networks and mobility data.
2 code implementations • 3 Jul 2020 • Aleksandar Bojchevski, Johannes Gasteiger, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rozemberczki, Michal Lukasik, Stephan Günnemann
Graph neural networks (GNNs) have emerged as a powerful approach for solving many network mining tasks.
no code implementations • 30 Jun 2020 • Anton Tsitsulin, John Palowitch, Bryan Perozzi, Emmanuel Müller
Graph Neural Networks (GNNs) have achieved state-of-the-art results on many graph analysis tasks such as node classification and link prediction.
1 code implementation • 7 May 2020 • Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, Kevin Murphy
The second, graph regularized neural networks, leverages graphs to augment neural network losses with a regularization objective for semi-supervised learning.
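The graph-regularization objective this taxonomy entry describes is classically a Laplacian smoothness penalty: predictions of neighboring nodes are pushed together. A minimal sketch of that idea (a generic illustration, not the survey's formulation; names and toy values are hypothetical):

```python
import numpy as np

def laplacian_reg(preds, edges):
    """Sum of squared prediction differences across graph edges.

    Added to a supervised loss, this encourages predictions to vary
    smoothly over the graph -- the core of graph-regularized learning.
    """
    return sum(np.sum((preds[i] - preds[j]) ** 2) for i, j in edges)

# Toy predictions for 4 nodes (2 classes); edges connect similar nodes.
preds = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.1, 0.9],
                  [0.2, 0.8]])
edges = [(0, 1), (2, 3)]
reg = laplacian_reg(preds, edges)
# total_loss = supervised_loss + lam * reg   (lam is a hyperparameter)
```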
no code implementations • 3 Mar 2020 • Anton Tsitsulin, Marina Munkhoeva, Bryan Perozzi
Graph comparison is a fundamental operation in data mining and information retrieval.
no code implementations • 25 Sep 2019 • John Palowitch, Bryan Perozzi
In this paper, we show that when metadata is correlated with the formation of node neighborhoods, unsupervised node embedding dimensions learn this metadata.
2 code implementations • 6 May 2019 • Alessandro Epasto, Bryan Perozzi
Recent interest in graph embedding methods has focused on learning a single representation for each node in the graph.
3 code implementations • 30 Apr 2019 • Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, Aram Galstyan
Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships.
Ranked #27 on Node Classification on Pubmed
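The "neighborhood mixing" this abstract refers to can be pictured as feeding a model features propagated through several powers of the normalized adjacency, rather than a single hop. A minimal sketch in that spirit (an illustration of the idea, not the paper's architecture; names are hypothetical):

```python
import numpy as np

def mixhop_features(adj, X, powers=(0, 1, 2)):
    """Concatenate features propagated through powers of the
    symmetrically normalized adjacency (with self-loops added).

    powers=(0, 1, 2) mixes the node's own features with 1-hop and
    2-hop neighborhood aggregates side by side.
    """
    n = adj.shape[0]
    A = adj + np.eye(n)                      # add self-loops
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))      # D^-1/2 (A + I) D^-1/2
    outs = []
    Ap = np.eye(n)                           # A_hat ** 0
    for p in range(max(powers) + 1):
        if p in powers:
            outs.append(Ap @ X)
        Ap = A_hat @ Ap
    return np.concatenate(outs, axis=1)

# Toy 3-node path graph with random 4-dim features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)
X = np.random.rand(3, 4)
H = mixhop_features(A, X)                    # shape (3, 12)
```

Keeping the per-power outputs separate (concatenated rather than summed) is what lets a downstream layer learn to mix, or contrast, different neighborhood scales.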
1 code implementation • 21 Apr 2019 • Rami Al-Rfou, Dustin Zelle, Bryan Perozzi
Second, for each pair of graphs, we train a cross-graph attention network which uses the node representations of an anchor graph to reconstruct another graph.
Ranked #3 on Graph Classification on D&D
1 code implementation • WWW 2019 • Alessandro Epasto, Bryan Perozzi
Recent interest in graph embedding methods has focused on learning a single representation for each node in the graph.
1 code implementation • 13 Sep 2018 • Haochen Chen, Xiaofei Sun, Yingtao Tian, Bryan Perozzi, Muhao Chen, Steven Skiena
Network embedding methods aim at learning low-dimensional latent representation of nodes in a network.
Social and Information Networks • Physics and Society
2 code implementations • 8 Aug 2018 • Haochen Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena
We further demonstrate the applications of network embeddings, and conclude the survey with future work in this area.
Social and Information Networks
1 code implementation • 24 Feb 2018 • Sami Abu-El-Haija, Amol Kapoor, Bryan Perozzi, Joonseok Lee
Graph Convolutional Networks (GCNs) have shown significant improvements in semi-supervised learning on graph-structured data.
Ranked #39 on Node Classification on Pubmed
no code implementations • ICLR 2018 • Sami Abu-El-Haija, Amol Kapoor, Bryan Perozzi, Joonseok Lee
Graph Convolutional Networks (GCNs) are a recently proposed architecture which has had success in semi-supervised learning on graph-structured data.
2 code implementations • NeurIPS 2018 • Sami Abu-El-Haija, Bryan Perozzi, Rami Al-Rfou, Alex Alemi
Graph embedding methods represent nodes in a continuous vector space, preserving information from the graph (e.g. by sampling random walks).
Ranked #62 on Node Classification on Citeseer
3 code implementations • 23 Jun 2017 • Haochen Chen, Bryan Perozzi, Yifan Hu, Steven Skiena
We present HARP, a novel method for learning low dimensional embeddings of a graph's nodes which preserves higher-order structural features.
Social and Information Networks
1 code implementation • 16 May 2017 • Sami Abu-El-Haija, Bryan Perozzi, Rami Al-Rfou
Individually, both of these contributions improve the learned representations, especially when there are memory constraints on the total size of the embeddings.
no code implementations • 12 May 2016 • Yingtao Tian, Vivek Kulkarni, Bryan Perozzi, Steven Skiena
Do word embeddings converge to learn similar things over different initializations?
2 code implementations • 6 May 2016 • Bryan Perozzi, Vivek Kulkarni, Haochen Chen, Steven Skiena
We present Walklets, a novel approach for learning multiscale representations of vertices in a network.
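One way to read the multiscale idea: from each random walk, collect pairs of nodes exactly k steps apart, giving a separate training corpus for each scale k. A minimal sketch of that pair extraction (an illustration of the idea, not the paper's code; names are hypothetical):

```python
def scale_pairs(walk, k):
    """Walklets-style corpus for one scale: pairs of nodes that are
    exactly k steps apart in a random walk. Each scale k yields a
    view of the graph at a different granularity."""
    return [(walk[i], walk[i + k]) for i in range(len(walk) - k)]

walk = [0, 1, 2, 3, 4]
pairs_k2 = scale_pairs(walk, 2)
# [(0, 2), (1, 3), (2, 4)]
```

Training a separate embedding on each scale's pairs then yields representations that capture both fine-grained and coarse community structure.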
Social and Information Networks • Physics and Society
2 code implementations • 22 Oct 2015 • Vivek Kulkarni, Bryan Perozzi, Steven Skiena
Our analysis of British and American English over a period of 100 years reveals that semantic variation between these dialects is shrinking.
no code implementations • 12 Nov 2014 • Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, Steven Skiena
We propose a new computational approach for tracking and detecting statistically significant linguistic shifts in the meaning and usage of words.
no code implementations • 14 Oct 2014 • Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, Steven Skiena
We describe a system that builds Named Entity Recognition (NER) annotators for 40 major languages using Wikipedia and Freebase.
no code implementations • 5 Apr 2014 • Vivek Kulkarni, Rami Al-Rfou', Bryan Perozzi, Steven Skiena
We evaluate the performance of training the model on the GPU and present optimizations that boost GPU performance. One of the key optimizations we propose increases the performance of a function involved in calculating and updating the gradient by approximately 50 times on the GPU for sufficiently large batch sizes.
14 code implementations • 26 Mar 2014 • Bryan Perozzi, Rami Al-Rfou, Steven Skiena
We present DeepWalk, a novel approach for learning latent representations of vertices in a network.
Ranked #1 on Link Property Prediction on ogbl-ppa
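The core of the approach is to generate short truncated random walks and treat them as sentences for a SkipGram-style model. A minimal sketch of the walk-generation step (a simplified illustration, not the released implementation; the toy graph and names are hypothetical):

```python
import random

def random_walks(adj_list, walk_len=5, walks_per_node=2, seed=0):
    """Truncated random walks over a graph given as an adjacency list.

    The returned walks are treated like sentences and fed to a
    SkipGram model (e.g. gensim's Word2Vec) to learn one latent
    vector per node.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj_list:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj_list[walk[-1]]
                if not nbrs:
                    break                    # dead end: stop the walk
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Toy 4-node path graph: 0 - 1 - 2 - 3.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
walks = random_walks(graph)
```

Because nearby nodes co-occur in many walks, the SkipGram objective places them close together in the embedding space, mirroring how word co-occurrence shapes word vectors.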
no code implementations • 6 Mar 2014 • Bryan Perozzi, Rami Al-Rfou, Vivek Kulkarni, Steven Skiena
Recent advancements in unsupervised feature learning have developed powerful latent representations of words.
no code implementations • WS 2013 • Rami Al-Rfou, Bryan Perozzi, Steven Skiena
We quantitatively demonstrate the utility of our word embeddings by using them as the sole features for training a part of speech tagger for a subset of these languages.
no code implementations • 15 Jan 2013 • Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena
We seek to better understand the difference in quality of the several publicly released embeddings.