no code implementations • 26 Feb 2025 • Jie He, Jennifer Neville, Mengting Wan, Longqi Yang, Hui Liu, Xiaofeng Xu, Xia Song, Jeff Z. Pan, Pei Zhou
Large Language Models (LLMs) can enhance their capabilities as AI assistants by integrating external tools, allowing them to access a wider range of information.
no code implementations • 1 Nov 2024 • Ying-Chun Lin, Jennifer Neville, Cassiano Becker, Purvanshi Metha, Nabiha Asghar, Vipul Agarwal
Understanding node representations in graph-based models is crucial for uncovering biases, diagnosing errors, and building trust in model decisions.
no code implementations • 4 Oct 2024 • Ying-Chun Lin, Jennifer Neville
To bridge this gap, we introduce Target-Aware Contrastive Learning (Target-Aware CL), which aims to enhance target-task performance by maximizing the mutual information between the target task and node representations via a self-supervised learning process.
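Mutual-information maximization in contrastive learning is commonly approximated with an InfoNCE-style loss. The sketch below is a generic illustration of that objective, not the paper's exact Target-Aware CL formulation; the function and argument names are illustrative:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style contrastive loss: pulls the anchor toward its
    positive view and pushes it away from negatives. Minimizing it
    maximizes a lower bound on mutual information."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / temperature)
    neg = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))
```

An anchor that is close to its positive and far from its negatives yields a low loss; swapping the positive and a negative raises it.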
no code implementations • 28 Aug 2024 • Taiwei Shi, Zhuoer Wang, Longqi Yang, Ying-Chun Lin, Zexue He, Mengting Wan, Pei Zhou, Sujay Jauhar, Xiaofeng Xu, Xia Song, Jennifer Neville
As large language models (LLMs) continue to advance, aligning these models with human preferences has emerged as a critical challenge.
no code implementations • 22 Jul 2024 • Jiaxing Zhang, Jiayi Liu, Dongsheng Luo, Jennifer Neville, Hua Wei
To solve this problem, we embed knowledge from a Large Language Model (LLM) into the GNN explanation network to avoid the learning-bias problem.
no code implementations • 1 Jun 2024 • Christine Herlihy, Jennifer Neville, Tobias Schnabel, Adith Swaminathan
We explore the use of Large Language Model (LLM)-based chatbots to power recommender systems.
1 code implementation • 13 May 2024 • Qi Chen, Xiubo Geng, Corby Rosset, Carolyn Buractaon, Jingwen Lu, Tao Shen, Kun Zhou, Chenyan Xiong, Yeyun Gong, Paul Bennett, Nick Craswell, Xing Xie, Fan Yang, Bryan Tower, Nikhil Rao, Anlei Dong, Wenqi Jiang, Zheng Liu, Mingqin Li, Chuanjie Liu, Zengzhong Li, Rangan Majumder, Jennifer Neville, Andy Oakley, Knut Magne Risvik, Harsha Vardhan Simhadri, Manik Varma, Yujing Wang, Linjun Yang, Mao Yang, Ce Zhang
Recent breakthroughs in large models have highlighted the critical significance of data scale, labels, and modalities.
no code implementations • 15 Apr 2024 • Tvrtko Tadić, Cassiano Becker, Jennifer Neville
Random Projections have been widely used to generate embeddings for various graph learning tasks due to their computational efficiency.
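Random projections reduce dimensionality with a random matrix, and Johnson-Lindenstrauss-type results guarantee that pairwise distances are approximately preserved with high probability. A minimal numpy sketch of the general technique (not the paper's specific graph-embedding construction):

```python
import numpy as np

def random_projection(X, d, seed=0):
    """Project the rows of X into d dimensions using a Gaussian
    random matrix scaled by 1/sqrt(d), so pairwise distances are
    approximately preserved."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], d)) / np.sqrt(d)
    return X @ R
```

The projection is data-independent, which is the source of the computational efficiency mentioned above: no fitting step is required.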
1 code implementation • 2 Apr 2024 • Tobias Schnabel, Jennifer Neville
In many modern LLM applications, such as retrieval augmented generation, prompts have become programs themselves.
no code implementations • 19 Mar 2024 • Ying-Chun Lin, Jennifer Neville, Jack W. Stokes, Longqi Yang, Tara Safavi, Mengting Wan, Scott Counts, Siddharth Suri, Reid Andersen, Xiaofeng Xu, Deepak Gupta, Sujay Kumar Jauhar, Xia Song, Georg Buscher, Saurabh Tiwary, Brent Hecht, Jaime Teevan
Accurate and interpretable user satisfaction estimation (USE) is critical for understanding, evaluating, and continuously improving conversational systems.
no code implementations • 19 Mar 2024 • Siddharth Suri, Scott Counts, Leijie Wang, Chacha Chen, Mengting Wan, Tara Safavi, Jennifer Neville, Chirag Shah, Ryen W. White, Reid Andersen, Georg Buscher, Sathish Manivannan, Nagu Rangan, Longqi Yang
Until recently, search engines were the predominant method for people to access online information.
no code implementations • 18 Mar 2024 • Mengting Wan, Tara Safavi, Sujay Kumar Jauhar, Yujin Kim, Scott Counts, Jennifer Neville, Siddharth Suri, Chirag Shah, Ryen W White, Longqi Yang, Reid Andersen, Georg Buscher, Dhruv Joshi, Nagu Rangan
Transforming unstructured text into structured and meaningful forms, organized by useful category labels, is a fundamental step in text mining for downstream analysis and application.
no code implementations • 27 Feb 2024 • Corby Rosset, Ho-Lam Chung, Guanghui Qin, Ethan C. Chau, Zhuo Feng, Ahmed Awadallah, Jennifer Neville, Nikhil Rao
We show that users spend a lot of "effort" on these questions in terms of signals like clicks and session length, and that they are also challenging for GPT-4.
no code implementations • 17 Feb 2024 • Jiayi Liu, Tinghan Yang, Jennifer Neville
Our experiments explore the performance of CliqueParcel, including efficiency, faithfulness, and the trade-off between them.
no code implementations • 15 Nov 2023 • Sheshera Mysore, Zhuoran Lu, Mengting Wan, Longqi Yang, Bahareh Sarrafzadeh, Steve Menezes, Tina Baghaee, Emmanuel Barajas Gonzalez, Jennifer Neville, Tara Safavi
Powerful large language models have facilitated the development of writing assistants that promise to significantly improve the quality and efficiency of composition and communication.
1 code implementation • 3 Oct 2023 • Canwen Xu, Corby Rosset, Ethan C. Chau, Luciano del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
Remarkably, our automatic contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to outperform ChatGPT.
no code implementations • 16 Sep 2023 • Sarkar Snigdha Sarathi Das, Chirag Shah, Mengting Wan, Jennifer Neville, Longqi Yang, Reid Andersen, Georg Buscher, Tara Safavi
The traditional Dialogue State Tracking (DST) problem aims to track user preferences and intents in user-agent conversations.
no code implementations • 14 Sep 2023 • Chirag Shah, Ryen W. White, Reid Andersen, Georg Buscher, Scott Counts, Sarkar Snigdha Sarathi Das, Ali Montazer, Sathish Manivannan, Jennifer Neville, Xiaochuan Ni, Nagu Rangan, Tara Safavi, Siddharth Suri, Mengting Wan, Leijie Wang, Longqi Yang
However, using LLMs to generate a user intent taxonomy and apply it for log analysis can be problematic for two main reasons: (1) such a taxonomy is not externally validated; and (2) there may be an undesirable feedback loop.
1 code implementation • 12 Aug 2023 • Jiayi Liu, Jennifer Neville
Email platforms need to generate personalized rankings of emails that satisfy user preferences, which may vary over time.
1 code implementation • 1 Aug 2023 • Giselle Zeno, Timothy La Fond, Jennifer Neville
To address these issues, in this work we propose DYnamic MOtif-NoDes (DYMOND) -- a generative model that considers (i) the dynamic changes in overall graph structure using temporal motif activity and (ii) the roles nodes play in motifs (e.g., one node plays the hub role in a wedge, while the remaining two act as spokes).
no code implementations • 24 Apr 2023 • Hogun Park, Jennifer Neville
Experiments show that the node ranking produced by GRAPH-wGD scores is highly correlated with true bridgeness scores.
no code implementations • 17 Feb 2023 • Anton Amirov, Chris Quirk, Jennifer Neville
We investigate graph representation learning approaches that enable models to generalize across graphs: given a model trained on representations from one graph, our goal is to apply the same model parameters, with minimal degradation in inference accuracy, to representations computed over a new graph unseen during training.
no code implementations • 27 Oct 2022 • Susheel Suresh, Danny Godbout, Arko Mukherjee, Mayank Shrivastava, Jennifer Neville, Pan Li
1.7% gains compared to individual client specific self-supervised training and (2) we construct and introduce a new cross-silo dataset called Amazon Co-purchase Networks that have both the characteristics of the motivated problem setting.
1 code implementation • 13 Jul 2022 • Sean R. Sinclair, Felipe Frujeri, Ching-An Cheng, Luke Marshall, Hugo Barbalho, Jingling Li, Jennifer Neville, Ishai Menache, Adith Swaminathan
Many resource management problems require sequential decision-making under uncertainty, where the only uncertainty affecting the decision outcomes comes from exogenous variables outside the control of the decision-maker.
no code implementations • 4 Feb 2022 • Mengyue Hang, Tobias Schnabel, Longqi Yang, Jennifer Neville
Most work in graph-based recommender systems considers a static setting where all information about test nodes (i.e., users and items) is available upfront at training time.
1 code implementation • 11 Jun 2021 • Susheel Suresh, Vinith Budde, Jennifer Neville, Pan Li, Jianzhu Ma
We find that the prediction performance of a wide range of GNN models is highly correlated with node-level assortativity.
1 code implementation • NeurIPS 2021 • Susheel Suresh, Pan Li, Cong Hao, Jennifer Neville
Self-supervised learning of graph neural networks (GNNs) is in high demand because labels are scarce in real-world graph/network data.
1 code implementation • 6 Mar 2021 • Changping Meng, Muhao Chen, Jie Mao, Jennifer Neville
Analyzing the readability of articles has been an important sociolinguistic task.
no code implementations • 22 Sep 2020 • Susheel Suresh, Jennifer Neville
Our method uses a cross-feedback paradigm wherein an embedding model guides the search of a rule-mining system that mines rules and infers new facts.
no code implementations • 26 Mar 2020 • Mengyue Hang, Jennifer Neville, Bruno Ribeiro
Graph Neural Networks (GNNs) have recently been used for node and graph classification tasks with great success, but GNNs model dependencies among the attributes of nearby neighboring nodes rather than dependencies among observed node labels.
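The dependency structure described here, where a node's representation is built from its neighbors' attributes rather than their labels, can be seen in a single mean-aggregation GNN layer. A minimal numpy sketch of that generic layer (not the specific architecture studied in the paper):

```python
import numpy as np

def gnn_layer(A, X, W):
    """One mean-aggregation GNN layer: each node's new representation
    depends on the attributes X of its neighbors (A is the adjacency
    matrix; self-loops are added), not on their observed labels."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)    # degrees incl. self-loop
    H = (A_hat / deg) @ X @ W                 # mean-aggregate, then transform
    return np.maximum(H, 0)                   # ReLU nonlinearity
```

Note that labels never appear in the computation, which is exactly the contrast with label-dependency models drawn in the sentence above.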
no code implementations • 2 Mar 2020 • Mahak Goindani, Jennifer Neville
Social Reinforcement Learning methods, which model agents in large networks, are useful for fake news mitigation, personalized teaching/healthcare, and viral marketing, but it is challenging to incorporate inter-agent dependencies into the models effectively due to network size and sparse interaction data.
1 code implementation • 1 Oct 2019 • S Chandra Mouli, Leonardo Teixeira, Jennifer Neville, Bruno Ribeiro
The goal of lifetime clustering is to develop an inductive model that maps subjects into $K$ clusters according to their underlying (unobserved) lifetime distribution.
1 code implementation • 4 Apr 2019 • Guilherme Gomes, Vinayak Rao, Jennifer Neville
Clustering and community detection with multiple graphs have typically focused on aligned graphs, where there is a mapping between nodes across the graphs (e.g., multi-view, multi-layer, temporal graphs).
no code implementations • 7 Sep 2018 • Guilherme Gomes, Vinayak Rao, Jennifer Neville
Current approaches to hypothesis testing for weighted networks typically require thresholding the edge weights to transform the data into binary networks.
no code implementations • ICML 2018 • Jiasen Yang, Qiang Liu, Vinayak Rao, Jennifer Neville
Recent work has combined Stein’s method with reproducing kernel Hilbert space theory to develop nonparametric goodness-of-fit tests for un-normalized probability distributions.
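The combination of Stein's method with RKHS theory yields the kernel Stein discrepancy (KSD), which measures fit to an unnormalized density p using only its score function. Below is a minimal 1-D V-statistic sketch with an RBF kernel; it illustrates the standard KSD construction, not the specific tests developed in the paper:

```python
import numpy as np

def ksd_vstat(x, score, h=1.0):
    """V-statistic estimate of the kernel Stein discrepancy between
    1-D samples x and a density p known only through its score
    function score(x) = d/dx log p(x), using an RBF kernel k with
    bandwidth h. The normalizing constant of p is never needed."""
    x = np.asarray(x, float)
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    s = score(x)
    u = (s[:, None] * s[None, :] * k          # score(x) score(y) k(x, y)
         + s[:, None] * (d / h**2) * k        # score(x) d/dy k(x, y)
         + s[None, :] * (-d / h**2) * k       # score(y) d/dx k(x, y)
         + (1 / h**2 - d**2 / h**4) * k)      # d2/dxdy k(x, y)
    return u.mean()
```

The statistic is near zero when the samples come from p and grows as they depart from it, which is what makes it usable as a goodness-of-fit test.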
no code implementations • ICLR 2018 • S Chandra Mouli, Bruno Ribeiro, Jennifer Neville
The goal of survival clustering is to map subjects (e.g., users in a social network, patients in a medical study) to $K$ clusters ranging from low-risk to high-risk.
no code implementations • 24 Jul 2017 • Jiasen Yang, Bruno Ribeiro, Jennifer Neville
Research in statistical relational learning has produced a number of methods for learning relational models from large-scale network data.
no code implementations • 2 Aug 2016 • Timothy La Fond, Jennifer Neville, Brian Gallagher
An important task in network analysis is the detection of anomalous events in a network time series.
no code implementations • 1 Jul 2016 • Iman Alodah, Jennifer Neville
Specifically, we propose a boosting algorithm for learning a collective inference model that predicts a continuous target variable.
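The core of boosting for a continuous target is fitting weak learners to the current residuals. The sketch below shows that generic regression-boosting loop with decision stumps on a single feature; the paper's collective inference model additionally uses relational features, which this illustration omits:

```python
import numpy as np

def fit_stump(x, r):
    """Find the threshold split of 1-D feature x that best fits
    residuals r in squared error; return (threshold, left, right)."""
    best = None
    for t in np.unique(x):
        left = r[x <= t].mean()
        right = r[x > t].mean() if (x > t).any() else 0.0
        pred = np.where(x <= t, left, right)
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left, right)
    return best[1:]

def boost(x, y, rounds=20, lr=0.5):
    """Gradient boosting for a continuous target: repeatedly fit a
    stump to the residuals and add it to the ensemble prediction."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(rounds):
        t, left, right = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, left, right)
        stumps.append((t, left, right))
    return pred, stumps
```

Each round shrinks the residuals, so even very weak stumps combine into an accurate regressor.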
no code implementations • 11 Jul 2015 • Pablo Robles-Granda, Sebastian Moreno, Jennifer Neville
Bayesian networks (BNs) are used for inference and sampling by exploiting conditional independence among random variables.
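The conditional-independence structure of a BN lets the joint distribution factor into small local tables, and sampling proceeds parent-first (ancestral sampling). A toy illustration with made-up numbers (the network and all probabilities below are invented for the example):

```python
import random

# Toy network: Cloudy -> Sprinkler, Cloudy -> Rain. Conditional
# independence of Sprinkler and Rain given Cloudy yields the
# factorization P(c, s, r) = P(c) * P(s | c) * P(r | c).
P_c = {True: 0.5, False: 0.5}
P_s = {True: {True: 0.1, False: 0.9},   # P(s | c=True)
       False: {True: 0.5, False: 0.5}}  # P(s | c=False)
P_r = {True: {True: 0.8, False: 0.2},   # P(r | c=True)
       False: {True: 0.2, False: 0.8}}  # P(r | c=False)

def joint(c, s, r):
    """Joint probability via the factorization the network encodes."""
    return P_c[c] * P_s[c][s] * P_r[c][r]

def ancestral_sample(rng):
    """Sample each variable in topological order, parents first."""
    c = rng.random() < P_c[True]
    s = rng.random() < P_s[c][True]
    r = rng.random() < P_r[c][True]
    return c, s, r
```

The full joint over three binary variables has 8 entries, but the network stores only 5 independent parameters, which is the saving that conditional independence buys.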
no code implementations • 13 Jun 2015 • Nesreen K. Ahmed, Jennifer Neville, Ryan A. Rossi, Nick Duffield, Theodore L. Willke
From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level and the local micro-level.
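The two smallest connected 3-node graphlets, triangles and wedges, can be counted directly from the adjacency matrix. A minimal numpy sketch of these standard counting identities (the paper's contribution is efficient estimation at scale, which this does not attempt):

```python
import numpy as np

def count_triangles(A):
    """Count triangles (the 3-clique graphlet): trace(A^3) counts
    closed 3-walks, and every triangle produces exactly 6 of them."""
    A = np.asarray(A, float)
    return int(round(np.trace(A @ A @ A) / 6))

def count_wedges(A):
    """Count wedges (open 2-paths): sum over nodes of C(deg, 2),
    minus 3 per triangle to keep only the open ones."""
    deg = np.asarray(A, float).sum(axis=1)
    return int((deg * (deg - 1) / 2).sum()) - 3 * count_triangles(A)
```

Ratios of such counts (e.g., triangles to wedges, the global clustering coefficient) are the kind of macro-level characterization the sentence above refers to.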
no code implementations • 14 Mar 2014 • Nesreen K. Ahmed, Christopher Cole, Jennifer Neville
We use the two representations as inputs to a mixture model to learn the latent state transitions that correspond to important changes in the Email graph structure over time.