Search Results for author: Jennifer Neville

Found 36 papers, 10 papers with code

Node Similarities under Random Projections: Limits and Pathological Cases

no code implementations 15 Apr 2024 Tvrtko Tadić, Cassiano Becker, Jennifer Neville

Random Projections have been widely used to generate embeddings for various graph tasks due to their computational efficiency.

Computational Efficiency · LEMMA
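
As a concrete illustration of the random-projection embeddings discussed in the entry above, here is a minimal sketch (a toy example, not the paper's code or its analysis of limits and pathological cases): each node's neighborhood row is compressed with a Gaussian matrix, and dot-product node similarities are compared before and after projection. The graph size n, projection dimension d, and edge density are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 64                                # nodes, projection dimension (assumed)

# Random undirected adjacency matrix standing in for a real graph.
A = (rng.random((n, n)) < 0.02).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Gaussian random projection of each node's neighborhood vector.
R = rng.standard_normal((n, d)) / np.sqrt(d)
Z = A @ R                                     # d-dimensional node embeddings

# Dot-product similarities are preserved in expectation (Johnson-Lindenstrauss style).
exact = A @ A.T
approx = Z @ Z.T
print(f"mean absolute similarity error: {np.abs(exact - approx).mean():.3f}")
```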

Prompts As Programs: A Structure-Aware Approach to Efficient Compile-Time Prompt Optimization

1 code implementation 2 Apr 2024 Tobias Schnabel, Jennifer Neville

We show that SAMMO generalizes previous methods and improves the performance of complex prompts on (1) instruction tuning, (2) RAG pipeline tuning, and (3) prompt compression, across several different LLMs.

TnT-LLM: Text Mining at Scale with Large Language Models

no code implementations 18 Mar 2024 Mengting Wan, Tara Safavi, Sujay Kumar Jauhar, Yujin Kim, Scott Counts, Jennifer Neville, Siddharth Suri, Chirag Shah, Ryen W White, Longqi Yang, Reid Andersen, Georg Buscher, Dhruv Joshi, Nagu Rangan

Transforming unstructured text into structured and meaningful forms, organized by useful category labels, is a fundamental step in text mining for downstream analysis and application.

Researchy Questions: A Dataset of Multi-Perspective, Decompositional Questions for LLM Web Agents

no code implementations 27 Feb 2024 Corby Rosset, Ho-Lam Chung, Guanghui Qin, Ethan C. Chau, Zhuo Feng, Ahmed Awadallah, Jennifer Neville, Nikhil Rao

We show that users spend a lot of "effort" on these questions in terms of signals like clicks and session length, and that they are also challenging for GPT-4.

Known Unknowns · Question Answering · +1

CliqueParcel: An Approach For Batching LLM Prompts That Jointly Optimizes Efficiency And Faithfulness

no code implementations 17 Feb 2024 Jiayi Liu, Tinghan Yang, Jennifer Neville

Our experiments explore the performance of CliqueParcel, including efficiency, faithfulness, and the trade-off between them.

Question Answering · Reading Comprehension

PEARL: Personalizing Large Language Model Writing Assistants with Generation-Calibrated Retrievers

no code implementations 15 Nov 2023 Sheshera Mysore, Zhuoran Lu, Mengting Wan, Longqi Yang, Steve Menezes, Tina Baghaee, Emmanuel Barajas Gonzalez, Jennifer Neville, Tara Safavi

Powerful large language models have facilitated the development of writing assistants that promise to significantly improve the quality and efficiency of composition and communication.

Language Modelling · Large Language Model · +1

Automatic Pair Construction for Contrastive Post-training

1 code implementation 3 Oct 2023 Canwen Xu, Corby Rosset, Ethan C. Chau, Luciano del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao

Remarkably, our automatic contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to outperform ChatGPT.

Using Large Language Models to Generate, Validate, and Apply User Intent Taxonomies

no code implementations 14 Sep 2023 Chirag Shah, Ryen W. White, Reid Andersen, Georg Buscher, Scott Counts, Sarkar Snigdha Sarathi Das, Ali Montazer, Sathish Manivannan, Jennifer Neville, Xiaochuan Ni, Nagu Rangan, Tara Safavi, Siddharth Suri, Mengting Wan, Leijie Wang, Longqi Yang

However, using LLMs to generate a user intent taxonomy and apply it for log analysis can be problematic for two main reasons: (1) such a taxonomy is not externally validated; and (2) there may be an undesirable feedback loop.

Stationary Algorithmic Balancing For Dynamic Email Re-Ranking Problem

1 code implementation 12 Aug 2023 Jiayi Liu, Jennifer Neville

Email platforms need to generate personalized rankings of emails that satisfy user preferences, which may vary over time.

Re-Ranking

DYMOND: DYnamic MOtif-NoDes Network Generative Model

1 code implementation 1 Aug 2023 Giselle Zeno, Timothy La Fond, Jennifer Neville

To address these issues, in this work we propose DYnamic MOtif-NoDes (DYMOND), a generative model that considers (i) the dynamic changes in overall graph structure using temporal motif activity and (ii) the roles nodes play in motifs (e.g., one node plays the hub role in a wedge, while the remaining two act as spokes).

Creating generalizable downstream graph models with random projections

no code implementations 17 Feb 2023 Anton Amirov, Chris Quirk, Jennifer Neville

We investigate graph representation learning approaches that enable models to generalize across graphs: given a model trained using the representations from one graph, our goal is to apply inference using those same model parameters when given representations computed over a new graph, unseen during model training, with minimal degradation in inference accuracy.

Computational Efficiency · Graph Representation Learning
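
A hedged sketch of the cross-graph setup described above (an illustration under assumed synthetic data, not the authors' method): a fixed random projection of aggregated node features is shared across graphs, so a classifier fit on one graph's node representations can be applied, with the same parameters, to a graph of a different size that was never seen during training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p, d = 20, 16                                  # raw feature dim, projected dim (assumed)
R = rng.standard_normal((p, d)) / np.sqrt(d)   # projection shared across graphs

def make_graph(n):
    """Synthetic graph with node attributes and a neighborhood-derived label."""
    A = (rng.random((n, n)) < 0.05).astype(float)
    A = np.triu(A, 1)
    A = A + A.T
    deg = A.sum(1, keepdims=True).clip(min=1)
    X = rng.standard_normal((n, p))
    Z = (A @ X) / deg @ R                      # projected neighborhood averages
    y = ((A @ X[:, 0]) / deg[:, 0] > 0).astype(int)
    return Z, y

Z_train, y_train = make_graph(400)             # training graph
Z_new, y_new = make_graph(600)                 # unseen graph, different size

clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
print(f"accuracy on the unseen graph: {clf.score(Z_new, y_new):.2f}")
```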

Federated Graph Representation Learning using Self-Supervision

no code implementations 27 Oct 2022 Susheel Suresh, Danny Godbout, Arko Mukherjee, Mayank Shrivastava, Jennifer Neville, Pan Li

1.7% gains compared to individual client-specific self-supervised training and (2) we construct and introduce a new cross-silo dataset called Amazon Co-purchase Networks that has both the characteristics of the motivated problem setting.

Federated Learning · Graph Representation Learning

Hindsight Learning for MDPs with Exogenous Inputs

1 code implementation 13 Jul 2022 Sean R. Sinclair, Felipe Frujeri, Ching-An Cheng, Luke Marshall, Hugo Barbalho, Jingling Li, Jennifer Neville, Ishai Menache, Adith Swaminathan

Many resource management problems require sequential decision-making under uncertainty, where the only uncertainty affecting the decision outcomes comes from exogenous variables outside the control of the decision-maker.

counterfactual · Decision Making · +3
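
A toy illustration of the hindsight idea in the entry above (under assumptions of my own, not the paper's algorithm or benchmarks): when the only uncertainty is an exogenous trace, here daily demand in a toy inventory problem, the hindsight-optimal action at each step can be read off the observed trace after the fact, and a policy can then be regressed onto those hindsight labels.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
demand = rng.poisson(5.0, size=T).astype(float)     # exogenous input trace

# Features available before acting: the previous three days of demand.
X = np.stack([np.roll(demand, k) for k in (1, 2, 3)], axis=1)[3:]
y = demand[3:]                                      # hindsight-optimal order = realized demand

# Fit a linear policy by least squares (a stand-in for any policy class).
F = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(F, y, rcond=None)
pred = F @ w
print(f"mean |order - demand| of the learned policy: {np.abs(pred - y).mean():.2f}")
```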

Lightweight Compositional Embeddings for Incremental Streaming Recommendation

no code implementations 4 Feb 2022 Mengyue Hang, Tobias Schnabel, Longqi Yang, Jennifer Neville

Most work in graph-based recommender systems considers a static setting where all information about test nodes (i.e., users and items) is available upfront at training time.

Recommendation Systems

Adversarial Graph Augmentation to Improve Graph Contrastive Learning

1 code implementation NeurIPS 2021 Susheel Suresh, Pan Li, Cong Hao, Jennifer Neville

Self-supervised learning of graph neural networks (GNNs) is greatly needed because of the widespread label scarcity in real-world graph/network data.

Contrastive Learning · Self-Supervised Learning

A Hybrid Model for Learning Embeddings and Logical Rules Simultaneously from Knowledge Graphs

no code implementations 22 Sep 2020 Susheel Suresh, Jennifer Neville

Our method uses a cross-feedback paradigm wherein an embedding model guides the search of a rule mining system to mine rules and infer new facts.

Knowledge Graph Embedding · Knowledge Graphs

A Collective Learning Framework to Boost GNN Expressiveness

no code implementations 26 Mar 2020 Mengyue Hang, Jennifer Neville, Bruno Ribeiro

Graph Neural Networks (GNNs) have recently been used for node and graph classification tasks with great success, but GNNs model dependencies among the attributes of nearby neighboring nodes rather than dependencies among observed node labels.

General Classification · Graph Classification · +2

Cluster-Based Social Reinforcement Learning

no code implementations 2 Mar 2020 Mahak Goindani, Jennifer Neville

Social Reinforcement Learning methods, which model agents in large networks, are useful for fake news mitigation, personalized teaching/healthcare, and viral marketing, but it is challenging to incorporate inter-agent dependencies into the models effectively due to network size and sparse interaction data.

Clustering · Marketing · +2

Deep Lifetime Clustering

1 code implementation 1 Oct 2019 S Chandra Mouli, Leonardo Teixeira, Jennifer Neville, Bruno Ribeiro

The goal of lifetime clustering is to develop an inductive model that maps subjects into K clusters according to their underlying (unobserved) lifetime distribution.

Clustering

Community detection over a heterogeneous population of non-aligned networks

1 code implementation 4 Apr 2019 Guilherme Gomes, Vinayak Rao, Jennifer Neville

Clustering and community detection with multiple graphs have typically focused on aligned graphs, where there is a mapping between nodes across the graphs (e.g., multi-view, multi-layer, temporal graphs).

Clustering · Community Detection

Multi-level hypothesis testing for populations of heterogeneous networks

no code implementations 7 Sep 2018 Guilherme Gomes, Vinayak Rao, Jennifer Neville

Current approaches to hypothesis testing for weighted networks typically require thresholding the edge weights to transform the data into binary networks.

Anomaly Detection · Two-sample testing
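
For concreteness, the thresholding step criticized above looks like the following (a toy weighted network and an arbitrary cutoff of my choosing); all weight information beyond the cutoff is discarded before testing:

```python
import numpy as np

W = np.array([[0.0, 0.8, 0.1],
              [0.8, 0.0, 0.4],
              [0.1, 0.4, 0.0]])   # weighted network
tau = 0.3                         # arbitrary threshold
A = (W > tau).astype(int)         # binarized network used by the test
print(A)
```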

Goodness-of-fit Testing for Discrete Distributions via Stein Discrepancy

no code implementations ICML 2018 Jiasen Yang, Qiang Liu, Vinayak Rao, Jennifer Neville

Recent work has combined Stein’s method with reproducing kernel Hilbert space theory to develop nonparametric goodness-of-fit tests for un-normalized probability distributions.

A Deep Learning Approach for Survival Clustering without End-of-life Signals

no code implementations ICLR 2018 S Chandra Mouli, Bruno Ribeiro, Jennifer Neville

The goal of survival clustering is to map subjects (e. g., users in a social network, patients in a medical study) to $K$ clusters ranging from low-risk to high-risk.

Clustering

Stochastic Gradient Descent for Relational Logistic Regression via Partial Network Crawls

no code implementations 24 Jul 2017 Jiasen Yang, Bruno Ribeiro, Jennifer Neville

Research in statistical relational learning has produced a number of methods for learning relational models from large-scale network data.

regression · Relational Reasoning

Combining Gradient Boosting Machines with Collective Inference to Predict Continuous Values

no code implementations 1 Jul 2016 Iman Alodah, Jennifer Neville

Specifically, we propose a boosting algorithm for learning a collective inference model that predicts a continuous target variable.

regression
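
A hedged sketch of the general idea in the entry above (not the authors' algorithm): gradient boosting predicts a continuous node target, and a collective-inference loop feeds the current predictions of each node's neighbors back in as an extra feature before refitting.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 300
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T
deg = A.sum(axis=1).clip(min=1)

x = rng.standard_normal((n, 3))                       # node attributes
y = x[:, 0] + 0.5 * (A @ x[:, 0]) / deg + 0.1 * rng.standard_normal(n)

pred = np.zeros(n)
for _ in range(3):                                    # alternate refit / re-aggregate
    nbr_mean = (A @ pred) / deg                       # neighbors' current predictions
    feats = np.c_[x, nbr_mean]
    model = GradientBoostingRegressor(random_state=0).fit(feats, y)
    pred = model.predict(feats)

print(f"training RMSE after collective iterations: {np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```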

Using Bayesian Network Representations for Effective Sampling from Generative Network Models

no code implementations 11 Jul 2015 Pablo Robles-Granda, Sebastian Moreno, Jennifer Neville

Bayesian networks (BNs) are used for inference and sampling by exploiting conditional independence among random variables.
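A tiny example of what exploiting conditional independence buys for sampling (a three-variable chain of my own, unrelated to the paper's generative network models): ancestral sampling only ever conditions each variable on its parent.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_chain_bn(n):
    """Ancestral sampling from the binary chain A -> B -> C."""
    a = rng.random(n) < 0.3                     # P(A=1) = 0.3
    b = rng.random(n) < np.where(a, 0.8, 0.1)   # P(B=1 | A)
    c = rng.random(n) < np.where(b, 0.7, 0.2)   # P(C=1 | B)
    return a, b, c

a, b, c = sample_chain_bn(10_000)
print(f"P(C=1) estimated by sampling: {c.mean():.3f}")
```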

Graphlet Decomposition: Framework, Algorithms, and Applications

no code implementations 13 Jun 2015 Nesreen K. Ahmed, Jennifer Neville, Ryan A. Rossi, Nick Duffield, Theodore L. Willke

From social science to biology, numerous applications rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level and the local micro-level.
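As a small worked example of graphlet counting (an illustration on a random graph, not the paper's framework or algorithms), the two connected 3-node graphlets, triangles and open wedges, can be counted directly from the adjacency matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
A = (rng.random((n, n)) < 0.05).astype(int)
A = np.triu(A, 1)
A = A + A.T

deg = A.sum(axis=1)
triangles = int(np.trace(A @ A @ A)) // 6                    # each triangle closes 6 length-3 walks
wedges = int((deg * (deg - 1) // 2).sum()) - 3 * triangles   # open 2-paths only
print(f"triangles={triangles}, wedges={wedges}")
```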

Learning the Latent State Space of Time-Varying Graphs

no code implementations 14 Mar 2014 Nesreen K. Ahmed, Christopher Cole, Jennifer Neville

We use the two representations as inputs to a mixture model to learn the latent state transitions that correspond to important changes in the Email graph structure over time.
