no code implementations • 27 Mar 2024 • Mingxuan Ju, William Shiao, Zhichun Guo, Yanfang Ye, Yozen Liu, Neil Shah, Tong Zhao
One branch of research enhances CF methods with the message passing used in graph neural networks, owing to its strong ability to extract knowledge from graph-structured data, such as the user-item bipartite graphs that arise naturally in CF.
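As a minimal sketch of what message passing on a user-item bipartite graph looks like, here is one degree-normalized mean-aggregation step (in the spirit of LightGCN-style propagation; the function name, shapes, and scheme are illustrative, not this paper's method):

```python
import numpy as np

def bipartite_propagate(user_emb, item_emb, interactions):
    """One mean-aggregation message-passing step on a user-item bipartite graph.

    user_emb: (n_users, d); item_emb: (n_items, d)
    interactions: list of (user_idx, item_idx) pairs.
    """
    n_users, n_items = len(user_emb), len(item_emb)
    A = np.zeros((n_users, n_items))
    for u, i in interactions:
        A[u, i] = 1.0
    # Degree-normalized aggregation: users gather from interacted items, and vice versa.
    du = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    di = np.maximum(A.sum(axis=0, keepdims=True), 1.0)
    new_user = (A @ item_emb) / du
    new_item = (A.T @ user_emb) / di.T
    return new_user, new_item

users = np.eye(2, 3)       # 2 users with 3-dim embeddings
items = np.eye(3)          # 3 items with 3-dim embeddings
nu, ni = bipartite_propagate(users, items, [(0, 0), (0, 1), (1, 2)])
print(nu[0])               # user 0's new embedding: mean of items 0 and 1
```

Stacking several such steps is what lets CF models propagate preference signal beyond directly observed interactions.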
no code implementations • 27 Mar 2024 • William Shiao, Mingxuan Ju, Zhichun Guo, Xin Chen, Evangelos Papalexakis, Tong Zhao, Neil Shah, Yozen Liu
This work focuses on a complementary problem: recommending new users and items unseen (out-of-vocabulary, or OOV) at training time.
no code implementations • 15 Feb 2024 • Zhichun Guo, Tong Zhao, Yozen Liu, Kaiwen Dong, William Shiao, Neil Shah, Nitesh V. Chawla
Graph Neural Networks (GNNs) are prominent in graph machine learning and have shown state-of-the-art performance in Link Prediction (LP) tasks.
2 code implementations • 13 Feb 2024 • Runjin Chen, Tong Zhao, Ajay Jaiswal, Neil Shah, Zhangyang Wang
Graph Neural Networks (GNNs) have driven advances in graph-structured data analysis.
no code implementations • 3 Feb 2024 • Jingzhe Liu, Haitao Mao, Zhikai Chen, Tong Zhao, Neil Shah, Jiliang Tang
In this work, we delve into neural scaling laws on graphs from both model and data perspectives.
no code implementations • 3 Feb 2024 • Haitao Mao, Zhikai Chen, Wenzhuo Tang, Jianan Zhao, Yao Ma, Tong Zhao, Neil Shah, Mikhail Galkin, Jiliang Tang
Graph Foundation Model (GFM) is an emerging research topic in the graph domain, aiming to develop a graph model capable of generalizing across different graphs and tasks.
1 code implementation • 18 Dec 2023 • Vijay Prakash Dwivedi, Yozen Liu, Anh Tuan Luu, Xavier Bresson, Neil Shah, Tong Zhao
As such, a key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism that encompasses a 4-hop receptive field, achieved through just 2-hop operations.
1 code implementation • 6 Oct 2023 • Yu Wang, Tong Zhao, Yuying Zhao, Yunchao Liu, Xueqi Cheng, Neil Shah, Tyler Derr
Despite the widespread belief that low-degree nodes exhibit poorer LP performance, our empirical findings add nuance to this viewpoint and prompt us to propose a better metric, Topological Concentration (TC), based on the intersection of each node's local subgraph with those of its neighbors.
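A simplified reading of such an overlap-based metric can be sketched as the average overlap between a node's local subgraph and those of its neighbors. The exact TC formula is defined in the paper; the score below is only an illustrative stand-in:

```python
def local_subgraph_nodes(adj, v, hops=1):
    """Nodes within `hops` of v (including v), via BFS on an adjacency dict."""
    frontier, seen = {v}, {v}
    for _ in range(hops):
        frontier = {w for u in frontier for w in adj.get(u, ())} - seen
        seen |= frontier
    return seen

def concentration(adj, v, hops=1):
    """Average overlap of v's local subgraph with its neighbors' subgraphs.

    A simplified, illustrative overlap score -- not the paper's exact TC formula.
    """
    mine = local_subgraph_nodes(adj, v, hops)
    neighbors = adj.get(v, ())
    if not neighbors:
        return 0.0
    return sum(len(mine & local_subgraph_nodes(adj, u, hops)) / len(mine)
               for u in neighbors) / len(neighbors)

# Triangle {0, 1, 2} plus a pendant node 3 attached to node 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(concentration(adj, 1))   # 1.0: full overlap inside the triangle
```

Intuitively, a node whose neighborhood is tightly shared with its neighbors scores high, regardless of its raw degree.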
1 code implementation • 1 Oct 2023 • Haitao Mao, Juanhui Li, Harry Shomer, Bingheng Li, Wenqi Fan, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang
We recognize three fundamental factors critical to link prediction: local structural proximity, global structural proximity, and feature proximity.
no code implementations • 3 Jul 2023 • Neha Sahipjohn, Neil Shah, Vishal Tambrahalli, Vineet Gandhi
Significant progress has been made in speaker-dependent Lip-to-Speech synthesis, which aims to generate speech from silent videos of talking faces.
1 code implementation • NeurIPS 2023 • Juanhui Li, Harry Shomer, Haitao Mao, Shenglai Zeng, Yao Ma, Neil Shah, Jiliang Tang, Dawei Yin
Furthermore, new and diverse datasets have also been created to better evaluate the effectiveness of these new models.
no code implementations • 12 Jun 2023 • William Shiao, Uday Singh Saini, Yozen Liu, Tong Zhao, Neil Shah, Evangelos E. Papalexakis
CARL-G is adaptable to different clustering methods and CVIs, and we show that with the right choice of clustering method and CVI, CARL-G outperforms node classification baselines on 4/5 datasets with up to a 79x training speedup compared to the best-performing baseline.
1 code implementation • NeurIPS 2023 • Haitao Mao, Zhikai Chen, Wei Jin, Haoyu Han, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang
Recent studies on Graph Neural Networks (GNNs) provide both empirical and theoretical evidence supporting their effectiveness in capturing structural patterns on both homophilic and certain heterophilic graphs.
no code implementations • 19 May 2023 • Neil Shah, Vishal Tambrahalli, Saiteja Kosgi, Niranjan Pedanekar, Vineet Gandhi
We present MParrotTTS, a unified multilingual, multi-speaker text-to-speech (TTS) synthesis model that can produce high-quality speech.
no code implementations • 1 Mar 2023 • Neil Shah, Saiteja Kosgi, Vishal Tambrahalli, Neha Sahipjohn, Niranjan Pedanekar, Vineet Gandhi
We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations.
1 code implementation • 25 Nov 2022 • William Shiao, Zhichun Guo, Tong Zhao, Evangelos E. Papalexakis, Yozen Liu, Neil Shah
In this work, we extensively evaluate the performance of existing non-contrastive methods for link prediction in both transductive and inductive settings.
1 code implementation • 18 Oct 2022 • Lingxiao Zhao, Louis Härtel, Neil Shah, Leman Akoglu
Our model is practical and progressively expressive, increasing in power with k and c. We demonstrate effectiveness on several benchmark datasets, achieving several state-of-the-art results with runtime and memory usage practical for real-world graphs.
no code implementations • 17 Oct 2022 • Rishav Chourasia, Neil Shah
Unlearning algorithms aim to remove deleted data's influence from trained models at a cost lower than full retraining.
no code implementations • 11 Oct 2022 • Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V. Chawla, Neil Shah, Tong Zhao
In this work, to combine the advantages of GNNs and MLPs, we start by exploring direct knowledge distillation (KD) methods for link prediction, i.e., predicted-logit-based matching and node-representation-based matching.
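The two matching schemes named above can be sketched as two loss functions: one matches the teacher's predicted edge logits, the other matches node representations directly. The inner-product edge scorer and the MSE form are illustrative assumptions, not necessarily the paper's exact objectives:

```python
import numpy as np

def edge_logits(h, edges):
    """Score node pairs with an inner product over their representations."""
    return np.array([h[u] @ h[v] for u, v in edges])

def logit_matching_loss(teacher_h, student_h, edges):
    """KD by matching predicted edge logits (MSE between teacher and student)."""
    t = edge_logits(teacher_h, edges)
    s = edge_logits(student_h, edges)
    return float(np.mean((t - s) ** 2))

def representation_matching_loss(teacher_h, student_h):
    """KD by matching node representations directly."""
    return float(np.mean((teacher_h - student_h) ** 2))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))                     # e.g. GNN node embeddings
student = teacher + 0.01 * rng.normal(size=(4, 8))    # e.g. an MLP's output
edges = [(0, 1), (2, 3)]
print(logit_matching_loss(teacher, student, edges) >= 0.0)   # True
```

Minimizing either loss trains the graph-free student to mimic the graph-aware teacher's link-prediction behavior.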
1 code implementation • 7 Oct 2022 • Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah
In this work, we provide a data-centric view to tackle these issues and propose a graph transformation framework named GTrans which adapts and refines graph data at test time to achieve better performance.
1 code implementation • 5 Oct 2022 • Mingxuan Ju, Tong Zhao, Qianlong Wen, Wenhao Yu, Neil Shah, Yanfang Ye, Chuxu Zhang
Moreover, we observe that learning from multiple philosophies enhances not only task generalization but also single-task performance, demonstrating that PARETOGNN achieves better task generalization via the disjoint yet complementary knowledge learned from different philosophies.
2 code implementations • 30 Sep 2022 • Xiaotian Han, Tong Zhao, Yozen Liu, Xia Hu, Neil Shah
Training graph neural networks (GNNs) on large graphs is complex and extremely time-consuming.
no code implementations • 17 Sep 2022 • Yiwei Wang, Bryan Hooi, Yozen Liu, Tong Zhao, Zhichun Guo, Neil Shah
However, HadamardMLP lacks scalability for retrieving the top-scoring neighbors on large graphs, since, to the best of our knowledge, no algorithm exists to retrieve the top-scoring neighbors for HadamardMLP decoders in sublinear complexity.
1 code implementation • 21 May 2022 • Juanhui Li, Harry Shomer, Jiayuan Ding, Yiqi Wang, Yao Ma, Neil Shah, Jiliang Tang, Dawei Yin
This suggests a conflation of scoring function design, loss function design, and MP in prior work, with promising insights regarding the scalability of state-of-the-art KGC methods today, as well as careful attention to more suitable MP designs for KGC tasks tomorrow.
1 code implementation • 17 Feb 2022 • Tong Zhao, Wei Jin, Yozen Liu, Yingheng Wang, Gang Liu, Stephan Günnemann, Neil Shah, Meng Jiang
Overall, our work aims to clarify the landscape of existing literature in graph data augmentation and motivates additional work in this area, providing a helpful resource for researchers and practitioners in the broader graph machine learning domain.
1 code implementation • 28 Jan 2022 • Shichang Zhang, Yozen Liu, Neil Shah, Yizhou Sun
Explaining machine learning models is an important and increasingly popular area of research interest.
2 code implementations • 1 Dec 2021 • Yu Wang, Yuying Zhao, Neil Shah, Tyler Derr
To this end, we introduce a novel framework, Graph-of-Graph Neural Networks (G$^2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from stochastic augmentations of graphs.
1 code implementation • ICLR 2022 • Shichang Zhang, Yozen Liu, Yizhou Sun, Neil Shah
Conversely, multi-layer perceptrons (MLPs) have no graph dependency and infer much faster than GNNs, even though they are less accurate than GNNs for node classification in general.
2 code implementations • ICLR 2022 • Wei Jin, Lingxiao Zhao, Shichang Zhang, Yozen Liu, Jiliang Tang, Neil Shah
Given the prevalence of large-scale graphs in real-world applications, the storage and time for training neural models have raised increasing concerns.
2 code implementations • ICLR 2022 • Lingxiao Zhao, Wei Jin, Leman Akoglu, Neil Shah
We choose the subgraph encoder to be a GNN (mainly MPNNs, considering scalability) to design a general framework that serves as a wrapper to uplift any GNN.
no code implementations • ICLR 2022 • Yao Ma, Xiaorui Liu, Neil Shah, Jiliang Tang
We find that this claim is not quite true, and in fact, GCNs can achieve strong performance on heterophilous graphs under certain conditions.
1 code implementation • ICLR 2022 • Wei Jin, Xiaorui Liu, Xiangyu Zhao, Yao Ma, Neil Shah, Jiliang Tang
Then we propose the AutoSSL framework which can automatically search over combinations of various self-supervised tasks.
1 code implementation • 8 Jun 2021 • Siddharth Bhatia, Mohit Wadhwa, Kenji Kawaguchi, Neil Shah, Philip S. Yu, Bryan Hooi
This higher-order sketch has the useful property of preserving the dense subgraph structure (dense subgraphs in the input turn into dense submatrices in the data structure).
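The dense-subgraph-to-dense-submatrix property can be illustrated with a count-min-style 2-D sketch that hashes both endpoints of each edge into a small count matrix. This toy version uses a single random hash (real sketches use several) and is an assumption-laden illustration, not the paper's data structure:

```python
import numpy as np

def edge_sketch(edges, width=8, seed=0):
    """Hash each edge's endpoints into a small count matrix (a CMS-style 2-D sketch).

    Nodes of a dense subgraph hash into few rows/columns, so the dense
    subgraph shows up as a dense submatrix of counts.
    """
    rng = np.random.default_rng(seed)
    h = {}   # random hash of node ids into `width` buckets
    S = np.zeros((width, width))

    def bucket(v):
        if v not in h:
            h[v] = int(rng.integers(width))
        return h[v]

    for u, v in edges:
        S[bucket(u), bucket(v)] += 1
    return S

# A small clique on nodes 0..3 (12 directed edges) plus two stray edges.
clique = [(u, v) for u in range(4) for v in range(4) if u != v]
stray = [(10, 20), (30, 40)]
S = edge_sketch(clique + stray)
print(int(S.sum()))   # 14: every edge is counted somewhere in the sketch
```

Because the four clique nodes occupy at most four row and column buckets, all twelve clique edges concentrate in a tiny submatrix of `S`.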
no code implementations • 15 Feb 2021 • Sara Abdali, Neil Shah, Evangelos E. Papalexakis
In this work, we introduce a novel generalization of graphs, i.e., the K-Nearest Hyperplanes (KNH) graph, where the nodes are defined by higher-order Euclidean subspaces for multi-view modeling of the nodes.
no code implementations • 15 Feb 2021 • Sara Abdali, Rutuja Gurav, Siddharth Menon, Daniel Fonseca, Negin Entezari, Neil Shah, Evangelos E. Papalexakis
To capture this overall look, we take screenshots of news articles served by either misinformative or trustworthy web domains and leverage a tensor-decomposition-based semi-supervised classification technique.
1 code implementation • 5 Dec 2020 • Shubhranshu Shekhar, Neil Shah, Leman Akoglu
Fairness and Outlier Detection (OD) are closely related, as the very goal of OD is to spot rare, minority samples in a given population.
1 code implementation • COLING 2020 • Brihi Joshi, Neil Shah, Francesco Barbieri, Leonardo Neves
Contextual embeddings derived from transformer-based neural language models have shown state-of-the-art performance for various tasks such as question answering, sentiment analysis, and textual similarity in recent years.
1 code implementation • 20 Oct 2020 • Tong Zhao, Bo Ni, Wenhao Yu, Zhichun Guo, Neil Shah, Meng Jiang
With Eland, anomaly detection performance at an earlier stage surpasses that of non-augmented methods that require significantly more observed data, by up to 15% in area under the ROC curve.
1 code implementation • 5 Oct 2020 • Yao Ma, Xiaorui Liu, Tong Zhao, Yozen Liu, Jiliang Tang, Neil Shah
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models including GCN, GAT, PPNP, and APPNP can be regarded as (approximately) solving a graph denoising problem with a smoothness assumption.
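The denoising problem referenced here, min_F ||F - X||^2 + lam * tr(F^T L F), has the closed form F* = (I + lam * L)^(-1) X, and taking gradient steps on it resembles neighborhood aggregation. A minimal numpy check on an arbitrary toy graph (the graph, signal, and lam are illustrative choices, not from the paper):

```python
import numpy as np

# Path graph on 3 nodes: adjacency and unnormalized Laplacian L = D - A.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L = np.diag(A.sum(1)) - A
X = np.array([[1.], [0.], [2.]])   # a noisy 1-d node signal
lam = 1.0

# Closed-form minimizer of ||F - X||^2 + lam * tr(F^T L F).
F = np.linalg.solve(np.eye(3) + lam * L, X)

def objective(F):
    return float(((F - X) ** 2).sum() + lam * (F * (L @ F)).sum())

print(objective(F) < objective(X))   # True: F is smoother yet still close to X
```

The smoothness term tr(F^T L F) equals the sum of squared differences across edges, which is why the minimizer pulls each node's value toward its neighbors' values, just as GNN aggregation does.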
2 code implementations • 11 Jun 2020 • Tong Zhao, Yozen Liu, Leonardo Neves, Oliver Woodford, Meng Jiang, Neil Shah
Our work shows that neural edge predictors can effectively encode class-homophilic structure to promote intra-class edges and demote inter-class edges in given graph structure, and our main contribution introduces the GAug graph data augmentation framework, which leverages these insights to improve performance in GNN-based node classification via edge prediction.
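The promote/demote idea above can be sketched as follows: given a matrix of predicted edge probabilities, add the most confident missing edges and drop the least confident existing ones. This is a simplified illustration of edge-prediction-based augmentation, not GAug's exact procedure:

```python
import numpy as np

def augment_graph(A, P, add_k=1, drop_k=1):
    """Promote likely edges and demote unlikely ones using predictions P.

    A: symmetric 0/1 adjacency; P: symmetric predicted edge probabilities.
    A simplified take on edge-prediction-based graph augmentation.
    """
    A = A.copy()
    iu = np.triu_indices_from(A, k=1)   # unordered node pairs
    probs, present = P[iu], A[iu]
    # Most confident absent pairs become new edges; least confident present pairs are dropped.
    add = [i for i in np.argsort(-probs) if present[i] == 0][:add_k]
    drop = [i for i in np.argsort(probs) if present[i] == 1][:drop_k]
    for i in add:
        A[iu[0][i], iu[1][i]] = A[iu[1][i], iu[0][i]] = 1
    for i in drop:
        A[iu[0][i], iu[1][i]] = A[iu[1][i], iu[0][i]] = 0
    return A

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])          # path 0-1-2
P = np.array([[0., .8, .9], [.8, 0., .1], [.9, .1, 0.]])  # predicted edge probabilities
A2 = augment_graph(A, P)
print(A2[0, 2], A2[1, 2])   # 1 0: edge (0, 2) added, weak edge (1, 2) dropped
```

When the edge predictor is class-homophilic, the added edges tend to connect same-class nodes and the dropped ones tend to cross classes, which is the effect the snippet describes.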
no code implementations • 10 Jun 2020 • Xianfeng Tang, Yozen Liu, Neil Shah, Xiaolin Shi, Prasenjit Mitra, Suhang Wang
In this paper, we study a novel problem of explainable user engagement prediction for social network apps.
1 code implementation • 8 May 2020 • Sara Abdali, Neil Shah, Evangelos E. Papalexakis
Distinguishing between misinformation and real information is one of the most challenging problems in today's interconnected world.
no code implementations • 28 Apr 2020 • Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, David Wagner
We propose a defense against patch attacks based on partially occluding the image around each candidate patch location, so that a few occlusions each completely hide the patch.
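The core mechanism can be sketched as classifying many partially occluded copies of the input: if the occlusions tile the image densely enough, at least one of them completely hides any small patch. The grid sizes and stand-in classifier below are illustrative assumptions, not the paper's exact configuration or aggregation rule:

```python
import numpy as np

def occlusion_votes(image, classify, occ=4, stride=4):
    """Predictions on partially occluded copies of `image`.

    Slides an occ x occ occluder over the image; any adversarial patch no
    larger than the occluder is fully hidden in at least one copy.
    `classify` is a stand-in for a trained classifier.
    """
    h, w = image.shape[:2]
    votes = []
    for y in range(0, h - occ + 1, stride):
        for x in range(0, w - occ + 1, stride):
            masked = image.copy()
            masked[y:y + occ, x:x + occ] = 0   # occlude one candidate patch region
            votes.append(classify(masked))
    return votes

img = np.zeros((8, 8))
img[0, 0] = 1.0   # a 1-pixel stand-in for an adversarial patch
votes = occlusion_votes(img, lambda m: int(m.max() > 0.5))
print(votes)      # [0, 1, 1, 1]: the occlusion covering (0, 0) hides the patch
```

The defense then aggregates these per-occlusion predictions, relying on the patch-free copies to recover the correct label.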
1 code implementation • 19 Aug 2019 • Hamed Nilforoshan, Neil Shah
Given the reach of web platforms, bad actors have considerable incentives to manipulate and defraud users at the expense of platform integrity.
no code implementations • 1 Aug 2018 • Rohan Kumar, Mohit Kumar, Neil Shah, Christos Faloutsos
In this paper, we address the problem of evaluating whether results served by an e-commerce search engine for a query are good or not.
no code implementations • 24 Apr 2018 • Gisel Bastidas Guacho, Sara Abdali, Neil Shah, Evangelos E. Papalexakis
Most existing works on this topic focus on manual feature extraction and supervised classification models leveraging a large number of labeled (fake or real) articles.
3 code implementations • 23 Apr 2018 • Srijan Kumar, Neil Shah
False information can be created and spread easily through the web and social media platforms, resulting in widespread real-world impact.
no code implementations • 5 Apr 2017 • Neil Shah, Hemank Lamba, Alex Beutel, Christos Faloutsos
Most past work on social network link fraud detection tries to separate genuine users from fraudsters, implicitly assuming that there is only one type of fraudulent behavior.
no code implementations • 4 Oct 2016 • Neil Shah
Livestreaming platforms have become increasingly popular in recent years as a means of sharing and advertising creative content.
no code implementations • 19 Nov 2015 • Bryan Hooi, Neil Shah, Alex Beutel, Stephan Günnemann, Leman Akoglu, Mohit Kumar, Disha Makhija, Christos Faloutsos
To combine these two approaches, we formulate our Bayesian Inference for Rating Data (BIRD) model, a flexible Bayesian model of user rating behavior.
no code implementations • 15 Oct 2014 • Neil Shah, Alex Beutel, Brian Gallagher, Christos Faloutsos
How can we detect suspicious users in large online networks?