no code implementations • 27 Sep 2023 • Yijun Tian, Huan Song, Zichen Wang, Haozhu Wang, Ziqing Hu, Fang Wang, Nitesh V. Chawla, Panpan Xu
While existing work has explored utilizing knowledge graphs to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost.
no code implementations • 2 Sep 2023 • Kaiwen Dong, Zhichun Guo, Nitesh V. Chawla
This discrepancy stems from a fundamental limitation: while MPNNs excel at node-level representation, they stumble with encoding the joint structural features essential to link prediction, such as common neighbors (CN).
no code implementations • 31 May 2023 • Jennifer J. Schnur, Nitesh V. Chawla
This tutorial paper provides a general overview of symbolic regression (SR) with specific focus on standards of interpretability.
1 code implementation • 27 May 2023 • Taicheng Guo, Kehan Guo, Bozhao Nan, Zhenwen Liang, Zhichun Guo, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang
In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain.
1 code implementation • 12 Apr 2023 • Damien A. Dablain, Nitesh V. Chawla
Data augmentation forms the cornerstone of many modern machine learning training pipelines; yet, the mechanisms by which it works are not clearly understood.
1 code implementation • 9 Apr 2023 • Yihong Ma, Yijun Tian, Nuno Moniz, Nitesh V. Chawla
Concerning the latter, we critically analyze recent work in class-imbalanced learning on graphs (CILG) and discuss urgent lines of inquiry within the topic.
1 code implementation • 2 Feb 2023 • Mai Anh Vu, Thu Nguyen, Tu T. Do, Nhan Phan, Nitesh V. Chawla, Pål Halvorsen, Michael A. Riegler, Binh T. Nguyen
Missing data frequently occurs in datasets across various domains, such as medicine, sports, and finance.
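As a point of reference for the imputation problem this entry studies, the sketch below shows column-mean imputation, a standard baseline (not the paper's method) for filling NaN entries in a numeric dataset:

```python
import numpy as np

# Toy data with missing entries (NaN), e.g. clinical or financial records.
X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan],
              [5.0, 6.0]])

def mean_impute(X):
    """Replace each NaN with the mean of its column (a common baseline)."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)          # per-column mean, ignoring NaNs
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)   # fill each gap with its column mean
    return X

X_imputed = mean_impute(X)
```

More sophisticated approaches (conditional expectation, model-based imputation) aim to beat this baseline by exploiting correlations between columns.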
no code implementations • 1 Feb 2023 • Yijun Tian, Shichao Pei, Xiangliang Zhang, Chuxu Zhang, Nitesh V. Chawla
Therefore, to improve the applicability of GNNs and fully encode the complicated topological information, knowledge distillation on graphs (KDG) has been introduced to build a smaller yet effective model that exploits more knowledge from the data, yielding both model compression and performance improvement.
1 code implementation • 15 Dec 2022 • Damien A. Dablain, Colin Bellinger, Bartosz Krawczyk, David W. Aha, Nitesh V. Chawla
We propose a set of techniques that can be used by both deep learning model users to identify, visualize and understand class prototypes, sub-concepts and outlier instances; and by imbalanced learning algorithm developers to detect features and class exemplars that are key to model performance.
1 code implementation • 29 Nov 2022 • Kaiwen Dong, Yijun Tian, Zhichun Guo, Yang Yang, Nitesh V. Chawla
In this paper, we first identify the dataset shift problem in the link prediction task and provide theoretical analyses on how existing link prediction methods are vulnerable to it.
no code implementations • 11 Oct 2022 • Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V. Chawla, Neil Shah, Tong Zhao
In this work, to combine the advantages of GNNs and MLPs, we start with exploring direct knowledge distillation (KD) methods for link prediction, i.e., predicted logit-based matching and node representation-based matching.
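Logit-based matching in knowledge distillation is typically implemented as a divergence between temperature-softened teacher and student distributions. The sketch below is a generic numpy version of that objective (a minimal illustration, not the paper's exact loss):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions --
    the standard logit-matching distillation objective."""
    p = softmax(teacher_logits, T)             # soft targets from the teacher GNN
    q = softmax(student_logits, T)             # student (e.g. MLP) predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(np.mean(kl) * T * T)          # T^2 rescales gradients, per Hinton et al.
```

The higher the temperature T, the more the teacher's relative confidences (rather than just its argmax) are transferred to the student.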
1 code implementation • 22 Aug 2022 • Yijun Tian, Chuxu Zhang, Zhichun Guo, Xiangliang Zhang, Nitesh V. Chawla
Existing methods attempt to address this scalability issue by training multi-layer perceptrons (MLPs) exclusively on node content features using labels derived from trained GNNs.
1 code implementation • 21 Aug 2022 • Yijun Tian, Kaiwen Dong, Chunhui Zhang, Chuxu Zhang, Nitesh V. Chawla
In light of this, we study the problem of generative SSL on heterogeneous graphs and propose HGMAE, a novel heterogeneous graph masked autoencoder model to address these challenges.
1 code implementation • 8 Jul 2022 • Zhichun Guo, Kehan Guo, Bozhao Nan, Yijun Tian, Roshni G. Iyer, Yihong Ma, Olaf Wiest, Xiangliang Zhang, Wei Wang, Chuxu Zhang, Nitesh V. Chawla
Recently, molecular representation learning (MRL) has achieved considerable progress, especially in methods based on deep molecular graph learning.
1 code implementation • 27 May 2022 • Steven J. Krieg, William C. Burgis, Patrick M. Soga, Nitesh V. Chawla
Graph neural networks (GNNs) continue to achieve state-of-the-art performance on many graph learning tasks, but rely on the assumption that a given graph is a sufficient approximation of the true neighborhood structure.
1 code implementation • 24 May 2022 • Yijun Tian, Chuxu Zhang, Zhichun Guo, Yihong Ma, Ronald Metoyer, Nitesh V. Chawla
Learning effective recipe representations is essential in food studies.
no code implementations • 24 May 2022 • Yijun Tian, Chuxu Zhang, Zhichun Guo, Chao Huang, Ronald Metoyer, Nitesh V. Chawla
We then propose RecipeRec, a novel heterogeneous graph learning model for recipe recommendation.
no code implementations • 17 Mar 2022 • Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye, Nitesh V. Chawla, Huan Liu
In light of this, few-shot learning on graphs (FSLG), which combines the strengths of graph representation learning and few-shot learning, has been proposed to tackle the performance degradation caused by limited annotated data.
no code implementations • 12 Jan 2022 • Steven J. Krieg, Christian W. Smith, Rusha Chatterjee, Nitesh V. Chawla
From a machine learning perspective, we found that the Random Forest model outperformed several deep models on our multimodal, noisy, and imbalanced data set, thus demonstrating the efficacy of our novel feature representation method in such a context.
1 code implementation • 4 Jun 2021 • Piotr Bielak, Tomasz Kajdanowicz, Nitesh V. Chawla
The self-supervised learning (SSL) paradigm is an active research area that aims to eliminate the need for expensive data labeling.
1 code implementation • 5 May 2021 • Damien Dablain, Bartosz Krawczyk, Nitesh V. Chawla
An important advantage of DeepSMOTE over GAN-based oversampling is that DeepSMOTE does not require a discriminator, and it generates high-quality artificial images that are both information-rich and suitable for visual inspection.
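DeepSMOTE builds on classic SMOTE interpolation, applied in an encoder's latent space. The sketch below shows the underlying SMOTE step on raw feature vectors (the building block only, not DeepSMOTE itself; function name and parameters are illustrative):

```python
import numpy as np

def smote_sample(X_minority, n_new, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between a point
    and one of its k nearest minority-class neighbours (classic SMOTE)."""
    rng = np.random.default_rng(rng)
    n = len(X_minority)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        # Distances from x_i to every minority point (including itself).
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]    # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                     # interpolation factor in [0, 1)
        synthetic.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(synthetic)
```

Because each synthetic point lies on a segment between two real minority samples, the generated data stays inside the minority class's feature envelope.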
1 code implementation • 16 Feb 2021 • Zhichun Guo, Chuxu Zhang, Wenhao Yu, John Herr, Olaf Wiest, Meng Jiang, Nitesh V. Chawla
The recent success of graph neural networks has significantly boosted molecular property prediction, advancing activities such as drug discovery.
Ranked #1 on Molecular Property Prediction (1-shot) on Tox21
no code implementations • 29 Dec 2020 • Piotr Bielak, Tomasz Kajdanowicz, Nitesh V. Chawla
Representation learning replaces the often arduous, manual featurization of networks with (unsupervised) feature learning, producing embeddings that can be applied to a variety of downstream learning tasks.
1 code implementation • 25 Jul 2020 • Daheng Wang, Zhihan Zhang, Yihong Ma, Tong Zhao, Tianwen Jiang, Nitesh V. Chawla, Meng Jiang
In this work, we present a novel framework called CoEvoGNN for modeling dynamic attributed graph sequence.
no code implementations • 17 Jun 2020 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts.
1 code implementation • 11 Jun 2020 • Daheng Wang, Meng Jiang, Munira Syed, Oliver Conway, Vishal Juneja, Sriram Subramanian, Nitesh V. Chawla
The user embeddings preserve spatial patterns and temporal patterns across a variety of periodicities (e.g., hourly, weekly, and weekday patterns).
no code implementations • 10 Jun 2020 • Pablo Robles-Granda, Suwen Lin, Xian Wu, Sidney D'Mello, Gonzalo J. Martinez, Koustuv Saha, Kari Nies, Gloria Mark, Andrew T. Campbell, Munmun De Choudhury, Anind D. Dey, Julie Gregg, Ted Grover, Stephen M. Mattingly, Shayan Mirjafari, Edward Moskal, Aaron Striegel, Nitesh V. Chawla
In this paper, we create a benchmark for predictive analysis of individuals from a perspective that integrates: physical and physiological behavior, psychological states and traits, and job performance.
1 code implementation • 26 Nov 2019 • Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, Nitesh V. Chawla
Knowledge graphs (KGs) serve as useful resources for various natural language processing applications.
1 code implementation • 7 Oct 2019 • Huaxiu Yao, Chuxu Zhang, Ying WEI, Meng Jiang, Suhang Wang, Junzhou Huang, Nitesh V. Chawla, Zhenhui Li
The challenging problem of semi-supervised node classification has been studied extensively.
no code implementations • 15 Aug 2019 • Mandana Saebi, Giovanni Luca Ciampaglia, Lance M. Kaplan, Nitesh V. Chawla
Representation learning on networks offers a powerful alternative to the often painstaking process of manual feature engineering and, as a result, has enjoyed considerable success in recent years.
no code implementations • 26 Jun 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Conditions are essential in the statements of biological literature.
1 code implementation • 6 Apr 2019 • Piotr Bielak, Kamil Tagowski, Maciej Falkiewicz, Tomasz Kajdanowicz, Nitesh V. Chawla
Experimental results on several downstream tasks, over seven real-world data sets, show that FILDNE is able to reduce memory and computational time costs while providing competitive quality measure gains with respect to the contemporary methods for representation learning on dynamic graphs.
5 code implementations • 20 Nov 2018 • Chuxu Zhang, Dongjin Song, Yuncong Chen, Xinyang Feng, Cristian Lumezanu, Wei Cheng, Jingchao Ni, Bo Zong, Haifeng Chen, Nitesh V. Chawla
Subsequently, given the signature matrices, a convolutional encoder is employed to encode the inter-sensor (time series) correlations, and an attention-based Convolutional Long Short-Term Memory (ConvLSTM) network is developed to capture the temporal patterns.
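A signature matrix of this kind can be formed from pairwise inner products between sensor series over a sliding window. The sketch below is a minimal construction of such matrices (assuming a single window length; the paper uses several scales):

```python
import numpy as np

def signature_matrices(X, window):
    """One n_sensors x n_sensors matrix of windowed inner products per time
    step, capturing inter-sensor correlation as encoder input.
    X has shape (n_steps, n_sensors)."""
    n_steps, n_sensors = X.shape
    mats = []
    for t in range(window, n_steps + 1):
        W = X[t - window:t]                    # (window, n_sensors) segment
        mats.append(W.T @ W / window)          # normalised pairwise inner products
    return np.stack(mats)                      # (n_steps - window + 1, n, n)
```

Each matrix is symmetric by construction, and its off-diagonal entries rise when two sensors move together within the window.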
1 code implementation • 27 Dec 2017 • Jian Xu, Mandana Saebi, Bruno Ribeiro, Lance M. Kaplan, Nitesh V. Chawla
A major branch of anomaly detection methods relies on dynamic networks: raw sequence data is first converted to a series of networks, then critical change points are identified in the evolving network structure.
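The first step described here, turning raw sequence data into a series of networks, can be sketched with plain dictionaries: bucket timestamped edge events into per-interval snapshot graphs, then track a simple statistic per snapshot (a stand-in for the structural change-point detection the entry refers to; names are illustrative):

```python
from collections import defaultdict

def snapshot_graphs(events, interval):
    """Bucket timestamped edge events (u, v, t) into a series of snapshot
    graphs, one per time interval, with integer edge weights."""
    snapshots = defaultdict(lambda: defaultdict(int))
    for u, v, t in events:
        snapshots[t // interval][(u, v)] += 1  # count repeats as edge weight
    return {w: dict(edges) for w, edges in snapshots.items()}

def edge_count_series(snapshots):
    """Total edge weight per snapshot; change points could be flagged where
    this (or a richer structural statistic) jumps between windows."""
    return {w: sum(edges.values()) for w, edges in snapshots.items()}
```

Real methods replace the edge count with structural measures (degree distributions, spectral distances) before locating change points in the resulting series.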
Social and Information Networks; Physics and Society
no code implementations • 1 Jun 2017 • Keith Feldman, Louis Faust, Xian Wu, Chao Huang, Nitesh V. Chawla
From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm.
2 code implementations • 15 Dec 2014 • Yuxiao Dong, Reid A. Johnson, Nitesh V. Chawla
The effectiveness of such predictions, however, is fundamentally limited by the power-law distribution of citations, whereby publications with few citations are extremely common and publications with many citations are relatively rare.
Social and Information Networks; Digital Libraries; Physics and Society; H.2.8; H.3.7
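The power-law skew described above can be illustrated with a quick simulation: draw synthetic "citation counts" from a heavy-tailed Zipf distribution (the exponent here is arbitrary, chosen only for illustration) and check how many publications have very few citations.

```python
import numpy as np

rng = np.random.default_rng(0)
# 10,000 synthetic 'citation counts' from a discrete power law (Zipf),
# a stand-in for the heavy-tailed citation distribution.
citations = rng.zipf(2.1, size=10_000)

# Under a power law, low-citation publications dominate the sample.
share_low = float(np.mean(citations <= 5))
```

With an exponent around 2, roughly nine in ten simulated publications end up with five or fewer citations, matching the intuition that highly cited papers are rare.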
no code implementations • 20 May 2014 • Everaldo Aguiar, Saurabh Nagrecha, Nitesh V. Chawla
In the nascent days of e-content delivery, having a superior product was enough to give companies an edge against the competition.