Search Results for author: Nitesh V. Chawla

Found 51 papers, 28 papers with code

Predicting Online Video Engagement Using Clickstreams

no code implementations • 20 May 2014 • Everaldo Aguiar, Saurabh Nagrecha, Nitesh V. Chawla

In the nascent days of e-content delivery, having a superior product was enough to give companies an edge over the competition.

Will This Paper Increase Your h-index? Scientific Impact Prediction

2 code implementations • 15 Dec 2014 • Yuxiao Dong, Reid A. Johnson, Nitesh V. Chawla

The effectiveness of such predictions, however, is fundamentally limited by the power-law distribution of citations, whereby publications with few citations are extremely common and publications with many citations are relatively rare.

Social and Information Networks • Digital Libraries • Physics and Society • H.2.8; H.3.7
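
To make the power-law claim above concrete, here is a minimal, purely illustrative numpy sketch that samples citation counts from a Zipf distribution (the exponent is hypothetical, not fitted to real bibliometric data) and shows how lopsided the resulting counts are:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical citation counts drawn from a Zipf (power-law) distribution;
# the exponent 2.0 is illustrative, not estimated from real data.
citations = rng.zipf(2.0, size=100_000) - 1   # shift so zero-citation papers exist

print("share with <= 5 citations  :", np.mean(citations <= 5))    # the vast majority
print("share with >= 100 citations:", np.mean(citations >= 100))  # a tiny fraction
```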

Detecting Anomalies in Sequential Data with Higher-order Networks

1 code implementation • 27 Dec 2017 • Jian Xu, Mandana Saebi, Bruno Ribeiro, Lance M. Kaplan, Nitesh V. Chawla

A major branch of anomaly detection methods relies on dynamic networks: raw sequence data is first converted to a series of networks, then critical change points are identified in the evolving network structure.

Social and Information Networks • Physics and Society
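
As a rough illustration of the pipeline the abstract describes (convert raw sequences into a series of networks, then look for change points in the evolving structure), the sketch below builds first-order transition networks over sliding windows of a toy sequence and scores the change between consecutive networks. The window size and distance measure are hypothetical choices, and the paper itself works with higher-order networks rather than the first-order ones shown here.

```python
import numpy as np
from collections import Counter

def transition_network(window):
    """First-order transition network: edge (a, b) weighted by how often b follows a."""
    return Counter(zip(window, window[1:]))

def network_distance(c1, c2):
    """L1 distance between the normalized edge-weight vectors of two networks."""
    edges = set(c1) | set(c2)
    n1, n2 = sum(c1.values()), sum(c2.values())
    return sum(abs(c1[e] / n1 - c2[e] / n2) for e in edges)

seq = list("ababababab" + "accaccaccacc")   # toy sequence with a regime change
win, step = 8, 4
nets = [transition_network(seq[i:i + win]) for i in range(0, len(seq) - win + 1, step)]
scores = [network_distance(a, b) for a, b in zip(nets, nets[1:])]
print(np.round(scores, 2))                  # a spike marks a candidate change point
```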

A Deep Neural Network for Unsupervised Anomaly Detection and Diagnosis in Multivariate Time Series Data

5 code implementations • 20 Nov 2018 • Chuxu Zhang, Dongjin Song, Yuncong Chen, Xinyang Feng, Cristian Lumezanu, Wei Cheng, Jingchao Ni, Bo Zong, Haifeng Chen, Nitesh V. Chawla

Subsequently, given the signature matrices, a convolutional encoder is employed to encode the inter-sensor (time series) correlations, and an attention-based Convolutional Long Short-Term Memory (ConvLSTM) network is developed to capture the temporal patterns.

Time Series • Time Series Anomaly Detection • +1
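
The signature matrices mentioned above are, at their core, windowed pairwise correlations between sensor series; a minimal numpy sketch of that first step is shown below (window lengths and scaling are illustrative, and the convolutional encoder and attention-based ConvLSTM stages are not reproduced here).

```python
import numpy as np

def signature_matrix(X, t, w):
    """Pairwise inner products between all sensor pairs over the window
    ending at time t, scaled by the window length w.
    X has shape (n_sensors, T)."""
    window = X[:, t - w:t]              # (n_sensors, w)
    return window @ window.T / w        # (n_sensors, n_sensors)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 200))           # 5 hypothetical sensors, 200 time steps
# A few signature matrices at different window lengths (multi-scale view).
mats = [signature_matrix(X, t=200, w=w) for w in (10, 30, 60)]
print([m.shape for m in mats])          # [(5, 5), (5, 5), (5, 5)]
```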

FILDNE: A Framework for Incremental Learning of Dynamic Networks Embeddings

1 code implementation • 6 Apr 2019 • Piotr Bielak, Kamil Tagowski, Maciej Falkiewicz, Tomasz Kajdanowicz, Nitesh V. Chawla

Experimental results on several downstream tasks, over seven real-world data sets, show that FILDNE is able to reduce memory and computational time costs while providing competitive gains in quality measures with respect to contemporary methods for representation learning on dynamic graphs.

Dynamic graph embedding • Incremental Learning • +2

HONEM: Learning Embedding for Higher Order Networks

no code implementations • 15 Aug 2019 • Mandana Saebi, Giovanni Luca Ciampaglia, Lance M. Kaplan, Nitesh V. Chawla

Representation learning on networks offers a powerful alternative to the often-painstaking process of manual feature engineering, and as a result, has enjoyed considerable success in recent years.

Feature Engineering • Link Prediction • +3

Few-Shot Knowledge Graph Completion

1 code implementation • 26 Nov 2019 • Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, Nitesh V. Chawla

Knowledge graphs (KGs) serve as useful resources for various natural language processing applications.

One-Shot Learning • Relation

Jointly Predicting Job Performance, Personality, Cognitive Ability, Affect, and Well-Being

no code implementations • 10 Jun 2020 • Pablo Robles-Granda, Suwen Lin, Xian Wu, Sidney D'Mello, Gonzalo J. Martinez, Koustuv Saha, Kari Nies, Gloria Mark, Andrew T. Campbell, Munmun De Choudhury, Anind D. Dey, Julie Gregg, Ted Grover, Stephen M. Mattingly, Shayan Mirjafari, Edward Moskal, Aaron Striegel, Nitesh V. Chawla

In this paper, we create a benchmark for predictive analysis of individuals from a perspective that integrates: physical and physiological behavior, psychological states and traits, and job performance.

Calendar Graph Neural Networks for Modeling Time Structures in Spatiotemporal User Behaviors

1 code implementation • 11 Jun 2020 • Daheng Wang, Meng Jiang, Munira Syed, Oliver Conway, Vishal Juneja, Sriram Subramanian, Nitesh V. Chawla

The user embeddings preserve spatial patterns and temporal patterns across a variety of periodicities (e.g., hourly, weekly, and weekday patterns).

Attribute

AttrE2vec: Unsupervised Attributed Edge Representation Learning

no code implementations • 29 Dec 2020 • Piotr Bielak, Tomasz Kajdanowicz, Nitesh V. Chawla

Representation learning has overcome the often arduous and manual featurization of networks through (unsupervised) feature learning, as it results in embeddings that can be applied to a variety of downstream learning tasks.

Clustering • Edge Classification • +1

Few-Shot Graph Learning for Molecular Property Prediction

1 code implementation • 16 Feb 2021 • Zhichun Guo, Chuxu Zhang, Wenhao Yu, John Herr, Olaf Wiest, Meng Jiang, Nitesh V. Chawla

The recent success of graph neural networks has significantly boosted molecular property prediction, advancing activities such as drug discovery.

Attribute • Drug Discovery • +7

DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data

1 code implementation • 5 May 2021 • Damien Dablain, Bartosz Krawczyk, Nitesh V. Chawla

An important advantage of DeepSMOTE over GAN-based oversampling is that DeepSMOTE does not require a discriminator, and it generates high-quality artificial images that are both information-rich and suitable for visual inspection.
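
For context, the SMOTE interpolation step that DeepSMOTE builds on (applied in an encoder's latent space and then decoded back into images in the paper) looks roughly like the sketch below; the minority-class latent vectors here are hypothetical random data.

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_sample(Z, k=5):
    """Classic SMOTE step: pick a minority point, pick one of its k nearest
    minority neighbors, and interpolate at a random fraction of the distance.
    Z holds minority-class feature (or latent) vectors, shape (n, d)."""
    i = rng.integers(len(Z))
    dists = np.linalg.norm(Z - Z[i], axis=1)
    nn = rng.choice(np.argsort(dists)[1:k + 1])   # exclude the point itself
    lam = rng.random()
    return Z[i] + lam * (Z[nn] - Z[i])

Z_minority = rng.normal(size=(20, 8))             # hypothetical latent codes
synthetic = np.stack([smote_sample(Z_minority) for _ in range(10)])
print(synthetic.shape)                            # (10, 8)
```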

Graph Barlow Twins: A self-supervised representation learning framework for graphs

1 code implementation • 4 Jun 2021 • Piotr Bielak, Tomasz Kajdanowicz, Nitesh V. Chawla

The self-supervised learning (SSL) paradigm is an essential area of exploration that aims to eliminate the need for expensive data labeling.

Contrastive Learning • Graph Representation Learning • +1
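
The Barlow Twins objective that the framework adapts to graphs is compact enough to sketch: standardize the node embeddings of two augmented views, form their cross-correlation matrix, and push it toward the identity. A minimal numpy version with hypothetical embeddings:

```python
import numpy as np

def barlow_twins_loss(Za, Zb, lam=5e-3):
    """Za, Zb: node embeddings from two augmented views, shape (n_nodes, d).
    Standardize per dimension, form the cross-correlation matrix, then
    penalize diagonal entries deviating from 1 and off-diagonals from 0."""
    Za = (Za - Za.mean(0)) / Za.std(0)
    Zb = (Zb - Zb.mean(0)) / Zb.std(0)
    C = Za.T @ Zb / len(Za)                       # (d, d) cross-correlation
    on_diag = np.sum((1.0 - np.diag(C)) ** 2)
    off_diag = np.sum(C ** 2) - np.sum(np.diag(C) ** 2)
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 32))                   # hypothetical node embeddings
view_a = Z + 0.1 * rng.normal(size=Z.shape)       # two noisy "augmented views"
view_b = Z + 0.1 * rng.normal(size=Z.shape)
print(barlow_twins_loss(view_a, view_b))
```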

Predicting Terrorist Attacks in the United States using Localized News Data

no code implementations • 12 Jan 2022 • Steven J. Krieg, Christian W. Smith, Rusha Chatterjee, Nitesh V. Chawla

From a machine learning perspective, we found that the Random Forest model outperformed several deep models on our multimodal, noisy, and imbalanced data set, thus demonstrating the efficacy of our novel feature representation method in such a context.

BIG-bench Machine Learning

Few-Shot Learning on Graphs

no code implementations • 17 Mar 2022 • Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye, Nitesh V. Chawla, Huan Liu

In light of this, few-shot learning on graphs (FSLG), which combines the strengths of graph representation learning and few-shot learning, has been proposed to tackle the performance degradation caused by limited annotated data.

Few-Shot Learning • Graph Mining • +1

Deep Ensembles for Graphs with Higher-order Dependencies

1 code implementation • 27 May 2022 • Steven J. Krieg, William C. Burgis, Patrick M. Soga, Nitesh V. Chawla

Graph neural networks (GNNs) continue to achieve state-of-the-art performance on many graph learning tasks, but rely on the assumption that a given graph is a sufficient approximation of the true neighborhood structure.

Graph Learning

Heterogeneous Graph Masked Autoencoders

1 code implementation • 21 Aug 2022 • Yijun Tian, Kaiwen Dong, Chunhui Zhang, Chuxu Zhang, Nitesh V. Chawla

In light of this, we study the problem of generative SSL on heterogeneous graphs and propose HGMAE, a novel heterogeneous graph masked autoencoder model to address these challenges.

Attribute • Self-Supervised Learning

NOSMOG: Learning Noise-robust and Structure-aware MLPs on Graphs

1 code implementation • 22 Aug 2022 • Yijun Tian, Chuxu Zhang, Zhichun Guo, Xiangliang Zhang, Nitesh V. Chawla

Existing methods attempt to address this scalability issue by training multi-layer perceptrons (MLPs) exclusively on node content features using labels derived from trained GNNs.
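
The "labels derived from trained GNNs" mentioned above are typically softened class distributions; a minimal sketch of that distillation target is below, using hypothetical logits (the paper's noise-robust and structure-aware components are not shown).

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax, computed row-wise in a numerically stable way."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student MLP against the teacher GNN's softened
    class distribution, averaged over nodes."""
    p_teacher = softmax(teacher_logits, T)
    log_q_student = np.log(softmax(student_logits, T) + 1e-12)
    return -np.mean(np.sum(p_teacher * log_q_student, axis=1))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(100, 7))   # hypothetical trained-GNN logits, 7 classes
student = rng.normal(size=(100, 7))   # MLP logits computed from node content features
print(distill_loss(student, teacher))
```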

Linkless Link Prediction via Relational Distillation

no code implementations • 11 Oct 2022 • Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V. Chawla, Neil Shah, Tong Zhao

In this work, to combine the advantages of GNNs and MLPs, we start by exploring direct knowledge distillation (KD) methods for link prediction, i.e., predicted logit-based matching and node representation-based matching.

Knowledge Distillation • Link Prediction • +1
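
As background, the two direct KD baselines named in the abstract are commonly instantiated as the losses sketched below with hypothetical numbers; the paper's own relational distillation objective goes further by matching relations among nodes, which this sketch does not capture.

```python
import numpy as np

def logit_matching_loss(student_logits, teacher_logits):
    """Predicted logit-based matching: match the student MLP's link logits to
    the teacher GNN's link logits on the same node pairs (mean squared error)."""
    return np.mean((student_logits - teacher_logits) ** 2)

def representation_matching_loss(student_emb, teacher_emb):
    """Node representation-based matching: align node embeddings, here via
    (1 - cosine similarity) averaged over nodes."""
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    return np.mean(1.0 - np.sum(s * t, axis=1))

rng = np.random.default_rng(0)
print(logit_matching_loss(rng.normal(size=512), rng.normal(size=512)))
print(representation_matching_loss(rng.normal(size=(200, 64)), rng.normal(size=(200, 64))))
```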

FakeEdge: Alleviate Dataset Shift in Link Prediction

1 code implementation • 29 Nov 2022 • Kaiwen Dong, Yijun Tian, Zhichun Guo, Yang Yang, Nitesh V. Chawla

In this paper, we first identify the dataset shift problem in the link prediction task and provide theoretical analyses on how existing link prediction methods are vulnerable to it.

Link Prediction

Interpretable ML for Imbalanced Data

1 code implementation • 15 Dec 2022 • Damien A. Dablain, Colin Bellinger, Bartosz Krawczyk, David W. Aha, Nitesh V. Chawla

We propose a set of techniques that can be used by both deep learning model users to identify, visualize and understand class prototypes, sub-concepts and outlier instances; and by imbalanced learning algorithm developers to detect features and class exemplars that are key to model performance.

Autonomous Driving • Binary Classification • +2
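
The abstract does not spell out the techniques, but one simple, hypothetical way to surface class prototypes and outlier instances from a model's feature space is nearest-to-centroid and farthest-from-centroid selection, sketched here only to give a feel for what such tools expose:

```python
import numpy as np

def prototypes_and_outliers(features, labels):
    """For each class: the instance closest to the class centroid (a crude
    prototype) and the one farthest from it (an outlier candidate)."""
    out = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        out[int(c)] = {"prototype": int(idx[dists.argmin()]),
                       "outlier": int(idx[dists.argmax()])}
    return out

rng = np.random.default_rng(0)
feats = rng.normal(size=(300, 16))       # hypothetical penultimate-layer features
labels = rng.integers(0, 3, size=300)    # class imbalance not modeled in this toy data
print(prototypes_and_outliers(feats, labels))
```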

Knowledge Distillation on Graphs: A Survey

no code implementations • 1 Feb 2023 • Yijun Tian, Shichao Pei, Xiangliang Zhang, Chuxu Zhang, Nitesh V. Chawla

Therefore, to improve the applicability of GNNs and fully encode the complicated topological information, knowledge distillation on graphs (KDG) has been introduced to build a smaller yet effective model and exploit more knowledge from data, leading to model compression and performance improvement.

Knowledge Distillation • Model Compression

Class-Imbalanced Learning on Graphs: A Survey

1 code implementation • 9 Apr 2023 • Yihong Ma, Yijun Tian, Nuno Moniz, Nitesh V. Chawla

Concerning the latter, we critically analyze recent work in CILG and discuss urgent lines of inquiry within the topic.

Graph Representation Learning

Towards Understanding How Data Augmentation Works with Imbalanced Data

1 code implementation • 12 Apr 2023 • Damien A. Dablain, Nitesh V. Chawla

Data augmentation forms the cornerstone of many modern machine learning training pipelines; yet, the mechanisms by which it works are not clearly understood.

Data Augmentation • feature selection

What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks

1 code implementation • NeurIPS 2023 • Taicheng Guo, Kehan Guo, Bozhao Nan, Zhenwen Liang, Zhichun Guo, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang

In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain.

In-Context Learning

Information Fusion via Symbolic Regression: A Tutorial in the Context of Human Health

no code implementations • 31 May 2023 • Jennifer J. Schnur, Nitesh V. Chawla

This tutorial paper provides a general overview of symbolic regression (SR) with specific focus on standards of interpretability.

Nutrition • regression • +1
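
For readers who want to try symbolic regression hands-on, the sketch below uses gplearn, one open-source SR library (not necessarily the tooling used in the tutorial), on a hypothetical toy dataset with a known ground-truth formula:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
# Hypothetical ground truth: y = x0^2 + 0.5 * x1 (plus a little noise).
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.01, size=200)

sr = SymbolicRegressor(population_size=1000, generations=10,
                       function_set=("add", "sub", "mul"),
                       parsimony_coefficient=0.01, random_state=0)
sr.fit(X, y)
print(sr._program)   # human-readable expression discovered by the search
```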

Pure Message Passing Can Estimate Common Neighbor for Link Prediction

1 code implementation • 2 Sep 2023 • Kaiwen Dong, Zhichun Guo, Nitesh V. Chawla

This discrepancy stems from a fundamental limitation: while MPNNs excel at node-level representation, they struggle to encode the joint structural features essential to link prediction, such as common neighbors (CN).

Graph Representation Learning • Link Prediction
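
CN here is the common-neighbor count. For a binary adjacency matrix it is simply an entry of A², which is exactly what one round of message passing over one-hot node indicators recovers; the paper studies how an MPNN can estimate this quantity, and the tiny sketch below only shows the exact count for intuition.

```python
import numpy as np

def common_neighbors(A, u, v):
    """Common-neighbor count CN(u, v) = (A @ A)[u, v] for a binary adjacency A.
    This is what one message-passing round recovers when each node's input
    feature is its one-hot indicator: each node aggregates its neighbors'
    indicators, and the dot product of u's and v's rows counts the overlap."""
    return int(A[u] @ A[v])

# Toy undirected graph with edges 0-1, 0-2, 1-2, 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(common_neighbors(A, 0, 1))  # 1 (node 2 is the shared neighbor)
print(common_neighbors(A, 0, 3))  # 1 (node 2 again)
```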

Graph Neural Prompting with Large Language Models

1 code implementation • 27 Sep 2023 • Yijun Tian, Huan Song, Zichen Wang, Haozhu Wang, Ziqing Hu, Fang Wang, Nitesh V. Chawla, Panpan Xu

While existing work has explored utilizing knowledge graphs (KGs) to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost.

Knowledge Graphs • Language Modelling • +2

Modeling non-uniform uncertainty in Reaction Prediction via Boosting and Dropout

no code implementations • 7 Oct 2023 • Taicheng Guo, Changsheng Ma, Xiuying Chen, Bozhao Nan, Kehan Guo, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang

With the widespread adoption of generative models, the Variational Autoencoder (VAE) framework has typically been employed to tackle challenges in reaction prediction, where the reactants are encoded as a condition for the decoder, which then generates the product.

HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks

no code implementations • 23 Oct 2023 • Yihong Ma, Ning Yan, Jiayu Li, Masood Mortazavi, Nitesh V. Chawla

The surge in prompt-based learning within Natural Language Processing (NLP) suggests the potential of adapting a "pre-train, prompt" paradigm to graphs as an alternative.

Node Classification

Representing Outcome-driven Higher-order Dependencies in Graphs of Disease Trajectories

1 code implementation • 23 Dec 2023 • Steven J. Krieg, Nitesh V. Chawla, Keith Feldman

The widespread application of machine learning techniques to biomedical data has produced many new insights into disease progression and into improving clinical care.

Large Language Model based Multi-Agents: A Survey of Progress and Challenges

1 code implementation • 21 Jan 2024 • Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang

To provide the community with an overview of this dynamic field, we present this survey to offer an in-depth discussion on the essential aspects of multi-agent systems based on LLMs, as well as the challenges.

Decision Making • Language Modelling • +1

Are we making much progress? Revisiting chemical reaction yield prediction from an imbalanced regression perspective

no code implementations • 6 Feb 2024 • Yihong Ma, Xiaobao Huang, Bozhao Nan, Nuno Moniz, Xiangliang Zhang, Olaf Wiest, Nitesh V. Chawla

The yield of a chemical reaction quantifies the percentage of the target product formed in relation to the reactants consumed during the chemical reaction.
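
In other words, yield is a simple ratio; a one-function sketch (with hypothetical mole amounts) makes the quantity being regressed explicit:

```python
def percent_yield(actual_product_mol: float, theoretical_product_mol: float) -> float:
    """Percentage of the target product actually formed, relative to the
    theoretical maximum implied by the consumed (limiting) reactants."""
    return 100.0 * actual_product_mol / theoretical_product_mol

# Hypothetical reaction: 0.5 mol of product obtained out of a possible 0.8 mol.
print(percent_yield(actual_product_mol=0.5, theoretical_product_mol=0.8))  # 62.5
```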

TinyLLM: Learning a Small Student from Multiple Large Language Models

no code implementations • 7 Feb 2024 • Yijun Tian, Yikun Han, Xiusi Chen, Wei Wang, Nitesh V. Chawla

To solve the problems and facilitate the learning of compact language models, we propose TinyLLM, a new knowledge distillation paradigm to learn a small student LLM from multiple large teacher LLMs.

Knowledge Distillation

Universal Link Predictor By In-Context Learning on Graphs

no code implementations • 12 Feb 2024 • Kaiwen Dong, Haitao Mao, Zhichun Guo, Nitesh V. Chawla

In this work, we introduce the Universal Link Predictor (UniLP), a novel model that combines the generalizability of heuristic approaches with the pattern learning capabilities of parametric models.

Hyperparameter Optimization • In-Context Learning • +1

G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering

1 code implementation • 12 Feb 2024 • Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V. Chawla, Thomas Laurent, Yann Lecun, Xavier Bresson, Bryan Hooi

Given a graph with textual attributes, we enable users to "chat with their graph": that is, to ask questions about the graph using a conversational interface.

Common Sense Reasoning • Graph Classification • +4
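
The retrieval half of "chat with your graph" can be pictured as nearest-neighbor search over embeddings of the graph's textual attributes before anything reaches the LLM; the sketch below is a deliberately simplified, hypothetical version of that step (the paper's full pipeline additionally assembles a relevant subgraph for the prompt, which is not shown).

```python
import numpy as np

def retrieve_top_k(query_emb, node_embs, k=5):
    """Cosine-similarity retrieval of the k most relevant nodes for a question;
    the retrieved nodes' text would then be placed into the LLM prompt."""
    q = query_emb / np.linalg.norm(query_emb)
    N = node_embs / np.linalg.norm(node_embs, axis=1, keepdims=True)
    scores = N @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]

rng = np.random.default_rng(0)
node_embs = rng.normal(size=(10_000, 384))   # hypothetical text-attribute embeddings
query_emb = rng.normal(size=384)             # hypothetical embedded question
idx, scores = retrieve_top_k(query_emb, node_embs)
print(idx, scores.round(3))
```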

UGMAE: A Unified Framework for Graph Masked Autoencoders

no code implementations • 12 Feb 2024 • Yijun Tian, Chuxu Zhang, Ziyi Kou, Zheyuan Liu, Xiangliang Zhang, Nitesh V. Chawla

In light of this, we propose UGMAE, a unified framework for graph masked autoencoders to address these issues from the perspectives of adaptivity, integrity, complementarity, and consistency.

Self-Supervised Learning

Can we Soft Prompt LLMs for Graph Learning Tasks?

no code implementations • 15 Feb 2024 • Zheyuan Liu, Xiaoxin He, Yijun Tian, Nitesh V. Chawla

Graphs play an important role in representing complex relationships in real-world applications such as social networks, biological data and citation networks.

Graph Learning • Link Prediction • +1

Node Duplication Improves Cold-start Link Prediction

no code implementations • 15 Feb 2024 • Zhichun Guo, Tong Zhao, Yozen Liu, Kaiwen Dong, William Shiao, Neil Shah, Nitesh V. Chawla

Graph Neural Networks (GNNs) are prominent in graph machine learning and have shown state-of-the-art performance in Link Prediction (LP) tasks.

Link Prediction • Recommendation Systems

CORE: Data Augmentation for Link Prediction via Information Bottleneck

no code implementations • 17 Apr 2024 • Kaiwen Dong, Zhichun Guo, Nitesh V. Chawla

Link prediction (LP) is a fundamental task in graph representation learning, with numerous applications in diverse domains.

Data Augmentation • Graph Representation Learning • +1

You do not have to train Graph Neural Networks at all on text-attributed graphs

no code implementations • 17 Apr 2024 • Kaiwen Dong, Zhichun Guo, Nitesh V. Chawla

Graph structured data, specifically text-attributed graphs (TAG), effectively represent relationships among varied entities.

Attribute • Classification • +2
