no code implementations • 15 Oct 2024 • Donald Loveland, Xinyi Wu, Tong Zhao, Danai Koutra, Neil Shah, Mingxuan Ju

Despite their high performance, encouraging dispersion for non-interacted pairs necessitates expensive regularization (e.g., negative sampling), hurting runtime and scalability.
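
The dispersion regularizer referenced above is typically realized via negative sampling. The sketch below is illustrative only — the function name and loss form are our assumptions, not the paper's objective — but it shows why the cost scales with the number of sampled negatives per interacted pair:

```python
import numpy as np

def sampled_dispersion_loss(user_emb, item_emb, interactions, num_neg=5, seed=0):
    """Toy logistic loss with negative sampling: attract interacted
    (user, item) pairs, repel randomly sampled non-interacted pairs.
    The inner loop over num_neg negatives is the expensive part.
    For simplicity, collisions with the positive item are not excluded."""
    rng = np.random.default_rng(seed)
    n_items = item_emb.shape[0]
    loss = 0.0
    for u, i in interactions:
        pos = user_emb[u] @ item_emb[i]
        loss += -np.log(1.0 / (1.0 + np.exp(-pos)))     # pull positive pair together
        for _ in range(num_neg):                         # dispersion via sampled negatives
            j = rng.integers(n_items)
            neg = user_emb[u] @ item_emb[j]
            loss += -np.log(1.0 / (1.0 + np.exp(neg)))   # push sampled pair apart
    return loss / len(interactions)
```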

no code implementations • 15 Oct 2024 • Jiacheng Lin, Kun Qian, Haoyu Han, Nurendra Choudhary, Tianxin Wei, Zhongruo Wang, Sahika Genc, Edward W Huang, Sheng Wang, Karthik Subbian, Danai Koutra, Jimeng Sun

Graph-structured information offers rich contextual information that can enhance language models by providing structured relationships and hierarchies, leading to more expressive embeddings for various applications such as retrieval, question answering, and classification.

no code implementations • 5 Oct 2024 • Donald Loveland, Danai Koutra

We then conduct a theoretical analysis that demonstrates how local homophily levels can alter predictions for differing sensitive attributes.

no code implementations • 26 Sep 2024 • Jiong Zhu, Gaotang Li, Yao-An Yang, Jing Zhu, Xuehao Cui, Danai Koutra

Heterophily, or the tendency of connected nodes in networks to have different class labels or dissimilar features, has been identified as challenging for many Graph Neural Network (GNN) models.
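
Heterophily is commonly quantified by the edge homophily ratio — the fraction of edges joining same-label nodes. A minimal sketch (the function name is ours, not from the paper):

```python
def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a class label.
    Values near 1 indicate homophily; values near 0, heterophily."""
    same = sum(labels[u] == labels[v] for u, v in edges)
    return same / len(edges)
```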

1 code implementation • 24 Jun 2024 • Jing Zhu, YuHang Zhou, Shengyi Qian, Zhongmou He, Tong Zhao, Neil Shah, Danai Koutra

Associating unstructured data with structured information is crucial for real-world tasks that require relevance search.

1 code implementation • 19 Jun 2024 • YuHang Zhou, Jing Zhu, Paiheng Xu, Xiaoyu Liu, Xiyao Wang, Danai Koutra, Wei Ai, Furong Huang

Large language models (LLMs) have significantly advanced various natural language processing tasks, but deploying them remains computationally expensive.

1 code implementation • 7 Jun 2024 • Zhongmou He, Jing Zhu, Shengyi Qian, Joyce Chai, Danai Koutra

To address the efficiency challenges at inference time, we introduce a retrieval-reranking scheme.
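
Retrieval-reranking generally means scoring all candidates with a cheap model and re-scoring only a short list with an expensive one. A generic sketch of that pattern, not the paper's specific implementation:

```python
def retrieve_then_rerank(query, candidates, cheap_score, expensive_score, k=10):
    """Stage 1: rank all candidates with a cheap scorer, keep the top k.
    Stage 2: rerank only that shortlist with the expensive scorer,
    so the costly model runs k times instead of len(candidates) times."""
    shortlist = sorted(candidates, key=lambda c: cheap_score(query, c), reverse=True)[:k]
    return sorted(shortlist, key=lambda c: expensive_score(query, c), reverse=True)
```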

no code implementations • 7 Jun 2024 • Yu Wang, Ryan A. Rossi, Namyong Park, Huiyuan Chen, Nesreen K. Ahmed, Puja Trivedi, Franck Dernoncourt, Danai Koutra, Tyler Derr

To remedy this crucial gap, we propose a new class of graph generative model called Large Graph Generative Model (LGGM) that is trained on a large corpus of graphs (over 5000 graphs) from 13 different domains.

no code implementations • 7 Jan 2024 • Puja Trivedi, Mark Heimann, Rushil Anirudh, Danai Koutra, Jayaraman J. Thiagarajan

While graph neural networks (GNNs) are widely used for node and graph representation learning tasks, the reliability of GNN uncertainty estimates under distribution shifts remains relatively under-explored.

1 code implementation • 24 Dec 2023 • Charles Dickens, Eddie Huang, Aishwarya Reganti, Jiong Zhu, Karthik Subbian, Danai Koutra

Notably, CONVMATCH achieves up to 95% of the prediction performance of GNNs on node classification while trained on graphs summarized down to 1% the size of the original graph.

no code implementations • 29 Nov 2023 • Puja Trivedi, Ryan Rossi, David Arbour, Tong Yu, Franck Dernoncourt, Sungchul Kim, Nedim Lipka, Namyong Park, Nesreen K. Ahmed, Danai Koutra

Most real-world networks are noisy and incomplete samples from an unknown target distribution.

1 code implementation • 25 Sep 2023 • Jing Zhu, Xiang Song, Vassilis N. Ioannidis, Danai Koutra, Christos Faloutsos

How can we enhance the node features acquired from Pretrained Models (PMs) to better suit downstream graph learning tasks?

no code implementations • 20 Sep 2023 • Puja Trivedi, Mark Heimann, Rushil Anirudh, Danai Koutra, Jayaraman J. Thiagarajan

Safe deployment of graph neural networks (GNNs) under distribution shift requires models to provide accurate confidence indicators (CI).

1 code implementation • 26 Jun 2023 • Gaotang Li, Marlena Duda, Xiang Zhang, Danai Koutra, Yujun Yan

Based on these insights, we propose a new model, Interpretable Graph Sparsification (IGS), which enhances graph classification performance by up to 5.1% with 55.0% fewer edges.

no code implementations • 8 Jun 2023 • Donald Loveland, Jiong Zhu, Mark Heimann, Benjamin Fish, Michael T. Schaub, Danai Koutra

We ground the practical implications of this work through granular analysis on five real-world datasets with varying global homophily levels, demonstrating that (a) GNNs can fail to generalize to test nodes that deviate from the global homophily of a graph, and (b) high local homophily does not necessarily confer high performance for a node.

no code implementations • 1 Jun 2023 • Jing Zhu, YuHang Zhou, Vassilis N. Ioannidis, Shengyi Qian, Wei Ai, Xiang Song, Danai Koutra

While Graph Neural Networks (GNNs) are remarkably successful in a variety of high-impact applications, we demonstrate that, in link prediction, the common practices of including the edges being predicted in the graph at training and/or test have outsized impact on the performance of low-degree nodes.

no code implementations • 24 May 2023 • Gaotang Li, Danai Koutra, Yujun Yan

Our empirical results reveal that our proposed size-insensitive attention strategy substantially enhances graph classification performance on large test graphs, which are 2-10 times larger than the training graphs, resulting in an improvement in F1 scores by up to 8%.

1 code implementation • 17 May 2023 • Jiong Zhu, Aishwarya Reganti, Edward Huang, Charles Dickens, Nikhil Rao, Karthik Subbian, Danai Koutra

Backed by our theoretical analysis, instead of maximizing the recovery of cross-instance node dependencies -- which has been considered the key to closing the performance gap between model aggregation and centralized training -- our framework leverages randomized assignment of nodes or super-nodes (i.e., collections of original nodes) to partition the training graph such that it improves data uniformity and minimizes the discrepancy of gradient and loss function across instances.

no code implementations • 23 Mar 2023 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan

Overall, our work carefully studies the effectiveness of popular scoring functions in realistic settings and helps to better understand their limitations.

no code implementations • 23 Mar 2023 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan

Advances in the expressivity of pretrained models have increased interest in the design of adaptation protocols which enable safe and effective transfer learning.

1 code implementation • 23 Aug 2022 • Jing Zhu, Danai Koutra, Mark Heimann

Network alignment, or the task of finding corresponding nodes in different networks, is an important problem formulation in many application domains.

1 code implementation • 4 Aug 2022 • Puja Trivedi, Ekdeep Singh Lubana, Mark Heimann, Danai Koutra, Jayaraman J. Thiagarajan

Overall, our work rigorously contextualizes, both empirically and theoretically, the effects of data-centric properties on augmentation strategies and learning paradigms for graph SSL.

no code implementations • 26 Jul 2022 • Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan

While directly fine-tuning (FT) large-scale, pretrained models on task-specific data is well-known to induce strong in-distribution task performance, recent works have demonstrated that different adaptation protocols, such as linear probing (LP) prior to FT, can improve out-of-distribution generalization.

no code implementations • 10 Jul 2022 • Donald Loveland, Jiong Zhu, Mark Heimann, Ben Fish, Michael T. Schaub, Danai Koutra

We study the task of node classification for graph neural networks (GNNs) and establish a connection between group fairness, as measured by statistical parity and equal opportunity, and local assortativity, i.e., the tendency of linked nodes to have similar attributes.
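
The two fairness measures named here have standard definitions: statistical parity compares positive-prediction rates across groups, and equal opportunity compares true-positive rates. A minimal sketch for binary predictions and two groups (function names and 0/1 encoding are our assumptions):

```python
def statistical_parity_gap(pred, group):
    """|P(pred=1 | group=0) - P(pred=1 | group=1)|: gap in positive-
    prediction rates between the two sensitive groups."""
    rate = lambda g: (sum(p for p, s in zip(pred, group) if s == g)
                      / sum(1 for s in group if s == g))
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(pred, label, group):
    """Gap in true-positive rates (recall on label==1) between groups."""
    def tpr(g):
        pos = [p for p, y, s in zip(pred, label, group) if s == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))
```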

no code implementations • 4 Jul 2022 • Houquan Zhou, Shenghua Liu, Danai Koutra, HuaWei Shen, Xueqi Cheng

Recent works try to improve scalability via graph summarization -- i.e., they learn embeddings on a smaller summary graph, and then restore the node embeddings of the original graph.

1 code implementation • 9 Nov 2021 • Fatemeh Vahedian, Ruiyu Li, Puja Trivedi, Di Jin, Danai Koutra

Understanding the training dynamics of deep neural networks (DNNs) is important as it can lead to improved training efficiency and task performance.

no code implementations • 5 Nov 2021 • Puja Trivedi, Ekdeep Singh Lubana, Yujun Yan, Yaoqing Yang, Danai Koutra

Unsupervised graph representation learning is critical to a wide range of applications where labels may be scarce or expensive to procure.

1 code implementation • 27 Oct 2021 • Di Jin, Bunyamin Sisman, Hao Wei, Xin Luna Dong, Danai Koutra

AdaMEL models the attribute importance that is used to match entities through an attribute-level self-attention mechanism, and leverages the massive unlabeled data from new data sources through domain adaptation to make it generic and data-source agnostic.

no code implementations • 29 Sep 2021 • Puja Trivedi, Mark Heimann, Danai Koutra, Jayaraman J. Thiagarajan

Using the recent population augmentation graph-based analysis of self-supervised learning, we show theoretically that the success of GCL with popular augmentations is bounded by the graph edit distance between different classes.

1 code implementation • 14 Jun 2021 • Jiong Zhu, Junchen Jin, Donald Loveland, Michael T. Schaub, Danai Koutra

We bridge two research directions on graph neural networks (GNNs), by formalizing the relation between heterophily of node labels (i.e., connected nodes tend to have dissimilar labels) and the robustness of GNNs to adversarial attacks.

no code implementations • EMNLP 2021 • Tara Safavi, Danai Koutra

Relational knowledge bases (KBs) are commonly used to represent world knowledge in machines.

1 code implementation • 26 Feb 2021 • Jing Zhu, Xingyu Lu, Mark Heimann, Danai Koutra

While most network embedding techniques model the relative positions of nodes in a network, recently there has been significant interest in structural embeddings that model node role equivalences, irrespective of their distances to any specific nodes.

1 code implementation • 15 Feb 2021 • Caleb Belth, Alican Büyükçakır, Danai Koutra

Thus, link prediction methods, which often rely on proximity-preserving embeddings or heuristic notions of node similarity, face a vast search space, with many pairs that are in close proximity, but that should not be linked.
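
A typical proximity heuristic of the kind referenced here is common-neighbors scoring, which also illustrates the problem: many unlinked pairs share neighbors and thus score highly. A minimal sketch:

```python
def common_neighbors_score(adj, u, v):
    """Heuristic link-prediction score: the number of neighbors that
    nodes u and v share in the adjacency-set representation `adj`.
    High scores mark close proximity, whether or not a link belongs."""
    return len(adj[u] & adj[v])
```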

1 code implementation • 12 Feb 2021 • Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, Danai Koutra

We are the first to take a unified perspective to jointly explain the oversmoothing and heterophily problems at the node level.

2 code implementations • 4 Feb 2021 • Ekdeep Singh Lubana, Puja Trivedi, Danai Koutra, Robert P. Dick

Catastrophic forgetting undermines the effectiveness of deep neural networks (DNNs) in scenarios such as continual learning and lifelong learning.

1 code implementation • 14 Jan 2021 • Junchen Jin, Mark Heimann, Di Jin, Danai Koutra

While most network embedding techniques model the proximity between nodes in a network, recently there has been significant interest in structural embeddings that are based on node equivalences, a notion rooted in sociology: equivalences or positions are collections of nodes that have similar roles -- i.e., similar functions, ties or interactions with nodes in other positions -- irrespective of their distance or reachability in the network.

Network Embedding Social and Information Networks

1 code implementation • EMNLP 2021 • Tara Safavi, Jing Zhu, Danai Koutra

Codifying commonsense knowledge in machines is a longstanding goal of artificial intelligence.

1 code implementation • 28 Sep 2020 • Jiong Zhu, Ryan A. Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K. Ahmed, Danai Koutra

Graph Neural Networks (GNNs) have proven to be useful for many different practical applications.

no code implementations • 21 Sep 2020 • Di Jin, Sungchul Kim, Ryan A. Rossi, Danai Koutra

While previous work on dynamic modeling and embedding has focused on representing a stream of timestamped edges using a time-series of graphs based on a specific time-scale (e.g., 1 month), we propose the notion of an $\epsilon$-graph time-series that uses a fixed number of edges for each graph, and show its superiority over the time-scale representation used in previous work.
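
The $\epsilon$-graph idea can be sketched in a few lines: rather than cutting the edge stream at fixed time intervals, cut it every $\epsilon$ edges. The function name and `(src, dst, timestamp)` tuple layout below are assumptions for illustration:

```python
def epsilon_graph_series(edge_stream, eps):
    """Partition a timestamped edge stream into snapshots of exactly
    `eps` edges each (the last snapshot may be smaller), instead of
    snapshots covering fixed time windows."""
    edges = sorted(edge_stream, key=lambda e: e[2])          # sort by timestamp
    return [edges[i:i + eps] for i in range(0, len(edges), eps)]
```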

2 code implementations • EMNLP 2020 • Tara Safavi, Danai Koutra

We present CoDEx, a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph completion benchmarks in scope and level of difficulty.

Ranked #2 on Link Prediction on CoDEx Large

1 code implementation • 30 Jul 2020 • Kyle K. Qin, Flora D. Salim, Yongli Ren, Wei Shao, Mark Heimann, Danai Koutra

In this paper, we propose a framework, called G-CREWE (Graph CompREssion With Embedding) to solve the network alignment problem.

1 code implementation • 27 Jun 2020 • Caleb Belth, Xinyi Zheng, Danai Koutra

Frequent pattern mining is a key area of study that gives insights into the structure and dynamics of evolving networks, such as social or road networks.

4 code implementations • NeurIPS 2020 • Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, Danai Koutra

We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features.

Graph Neural Network Node Classification on Non-Homophilic (Heterophilic) Graphs

1 code implementation • NeurIPS 2020 • Yujun Yan, Kevin Swersky, Danai Koutra, Parthasarathy Ranganathan, Milad Hashemi

A significant effort has been made to train neural networks that replicate algorithmic reasoning, but they often fail to learn the abstract concepts underlying these algorithms.

1 code implementation • 10 May 2020 • Xiyuan Chen, Mark Heimann, Fatemeh Vahedian, Danai Koutra

Network alignment, the process of finding correspondences between nodes in different graphs, has many scientific and industrial applications.

no code implementations • EMNLP 2020 • Tara Safavi, Danai Koutra, Edgar Meij

We first conduct an evaluation under the standard closed-world assumption (CWA), in which predicted triples not already in the knowledge graph are considered false, and show that existing calibration techniques are effective for KGE under this common but narrow assumption.

1 code implementation • 23 Mar 2020 • Caleb Belth, Xinyi Zheng, Jilles Vreeken, Danai Koutra

We apply our rules to three large KGs (NELL, DBpedia, and Yago), and tasks such as compression, various types of error detection, and identification of incomplete information.

no code implementations • ICLR 2020 • Yujun Yan, Kevin Swersky, Danai Koutra, Parthasarathy Ranganathan, Milad Hashemi

Turing complete computation and reasoning are often regarded as necessary precursors to general intelligence.

no code implementations • 22 Aug 2019 • Ryan A. Rossi, Di Jin, Sungchul Kim, Nesreen K. Ahmed, Danai Koutra, John Boaz Lee

Unfortunately, recent work has sometimes confused the notion of structural roles and communities (based on proximity) leading to misleading or incorrect claims about the capabilities of network embedding methods.

1 code implementation • 18 Apr 2019 • Di Jin, Mark Heimann, Ryan Rossi, Danai Koutra

Identity stitching, the task of identifying and matching various online references (e.g., sessions over different devices and timespans) to the same user in real-world web services, is crucial for personalization and recommendations.

1 code implementation • 11 Nov 2018 • Di Jin, Ryan Rossi, Danai Koutra, Eunyee Koh, Sungchul Kim, Anup Rao

Motivated by the computational and storage challenges that dense embeddings pose, we introduce the problem of latent network summarization that aims to learn a compact, latent representation of the graph structure with dimensionality that is independent of the input graph size (i.e., #nodes and #edges), while retaining the ability to derive node representations on the fly.

Social and Information Networks

no code implementations • 3 May 2018 • Saba A. Al-Sayouri, Ekta Gujral, Danai Koutra, Evangelos E. Papalexakis, Sarah S. Lam

Contrary to baseline methods, which generally learn explicit graph representations solely from an adjacency matrix, t-PINE exploits a multi-view information graph: the adjacency matrix represents the first view, and a nearest-neighbor adjacency computed over the node features is the second view, in order to learn explicit and implicit node representations using the Canonical Polyadic (a.k.a.

no code implementations • 3 May 2018 • Saba A. Al-Sayouri, Danai Koutra, Evangelos E. Papalexakis, Sarah S. Lam

Representation learning algorithms aim to preserve local and global network structure by identifying node neighborhood notions.

1 code implementation • 17 Feb 2018 • Mark Heimann, Haoming Shen, Tara Safavi, Danai Koutra

Problems involving multiple networks are prevalent in many scientific and other domains.

Social and Information Networks

1 code implementation • 18 Oct 2017 • Josh Gardner, Danai Koutra, Jawad Mroueh, Victor Pang, Arya Farahi, Sam Krassenstein, Jared Webb

Understanding the existence of patterns and trends in this data could be useful to a variety of stakeholders, particularly as Detroit emerges from Chapter 9 bankruptcy, but the patterns in such data are often complex and multivariate and the city lacks dedicated resources for detailed analysis of this data.

Computers and Society

no code implementations • 14 Dec 2016 • Yike Liu, Tara Safavi, Abhilash Dighe, Danai Koutra

While advances in computing resources have made processing enormous amounts of data possible, human ability to identify patterns in such data has not scaled accordingly.

1 code implementation • 27 Jun 2014 • Wolfgang Gatterbauer, Stephan Günnemann, Danai Koutra, Christos Faloutsos

Often, we can answer such questions and label nodes in a network based on the labels of their neighbors and appropriate assumptions of homophily ("birds of a feather flock together") or heterophily ("opposites attract").
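
Neighbor-based labeling of this kind is often formalized as (linearized) belief propagation with a class-compatibility matrix: a diagonal-dominant matrix encodes homophily, an off-diagonal-dominant one encodes heterophily. The damped iteration below is a simplified sketch under those assumptions, not the paper's exact update rule:

```python
import numpy as np

def propagate_beliefs(A, prior, H, steps=10, damping=0.1):
    """Simplified linearized propagation: each node's belief is its prior
    plus damped neighbor beliefs passed through compatibility matrix H.
    A: adjacency matrix; prior: (nodes x classes) seed beliefs.
    Homophily: H diagonal-dominant; heterophily: off-diagonal-dominant."""
    B = prior.copy()
    for _ in range(steps):
        B = prior + damping * (A @ B @ H)   # damping keeps the iteration stable
    return B
```

With a homophilous identity matrix for H, a label seeded at one node raises its neighbor's belief in the same class, matching the "birds of a feather" intuition.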

1 code implementation • 18 Apr 2014 • Leman Akoglu, Hanghang Tong, Danai Koutra

This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs.

Social and Information Networks Cryptography and Security

no code implementations • 12 Sep 2012 • Michele Berlingerio, Danai Koutra, Tina Eliassi-Rad, Christos Faloutsos

Having such features will enable a wealth of graph mining tasks, including clustering, outlier detection, visualization, etc.

Social and Information Networks Physics and Society Applications
