Search Results for author: Shelby Heinecke

Found 21 papers, 9 papers with code

AgentLite: A Lightweight Library for Building and Advancing Task-Oriented LLM Agent System

1 code implementation • 23 Feb 2024 • Zhiwei Liu, Weiran Yao, JianGuo Zhang, Liangwei Yang, Zuxin Liu, Juntao Tan, Prafulla K. Choubey, Tian Lan, Jason Wu, Huan Wang, Shelby Heinecke, Caiming Xiong, Silvio Savarese

Thus, we open-source a new AI agent library, AgentLite, which simplifies this process by offering a lightweight, user-friendly platform for innovating LLM agent reasoning, architectures, and applications.
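
To make the idea of a lightweight task-oriented agent concrete, here is a minimal ReAct-style control loop in Python. The function names and the "Action:/Finish:" protocol are illustrative assumptions, not AgentLite's actual API; see the AgentLite repository for the real interfaces.

```python
from typing import Callable, Dict

def run_agent(task: str,
              llm: Callable[[str], str],
              actions: Dict[str, Callable[[str], str]],
              max_steps: int = 5) -> str:
    """Minimal ReAct-style loop: the LLM either finishes or calls an action."""
    history = f"Task: {task}"
    for _ in range(max_steps):
        decision = llm(history)  # expected: "Finish: <answer>" or "Action: <name> | <input>"
        if decision.startswith("Finish:"):
            return decision[len("Finish:"):].strip()
        name, _, arg = decision[len("Action:"):].partition("|")
        handler = actions.get(name.strip(), lambda a: "unknown action")
        history += f"\n{decision}\nObservation: {handler(arg.strip())}"
    return "max steps reached"

# Dummy LLM that finishes immediately, just to show the control flow.
print(run_agent("say hi", lambda history: "Finish: hi", {}))  # -> hi
```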

AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning

2 code implementations • 23 Feb 2024 • JianGuo Zhang, Tian Lan, Rithesh Murthy, Zhiwei Liu, Weiran Yao, Juntao Tan, Thai Hoang, Liangwei Yang, Yihao Feng, Zuxin Liu, Tulika Awalgaonkar, Juan Carlos Niebles, Silvio Savarese, Shelby Heinecke, Huan Wang, Caiming Xiong

It meticulously standardizes and unifies these trajectories into a consistent format, streamlining the creation of a generic data loader optimized for agent training.
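
A minimal sketch of the trajectory-unification idea, assuming a hypothetical turn-based schema (the actual AgentOhana format is defined by the project itself): heterogeneous trajectories are mapped to one shared structure so a single generic loader can batch them for training.

```python
def standardize_trajectory(raw_steps, source):
    # Map environment-specific step records onto one shared schema
    # (field names here are illustrative, not AgentOhana's real format).
    return [{
        "source": source,                                    # originating environment/tool
        "role": step.get("role", "assistant"),               # user / assistant / tool
        "content": step.get("text") or step.get("content", ""),
        "reward": float(step.get("reward", 0.0)),
    } for step in raw_steps]

def batches(trajectories, batch_size=2):
    # Generic loader over the unified records.
    for i in range(0, len(trajectories), batch_size):
        yield trajectories[i:i + batch_size]

unified = standardize_trajectory([{"text": "book a flight", "role": "user"}], source="webshop")
print(next(batches(unified, batch_size=1)))
```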

Causal Layering via Conditional Entropy

no code implementations • 19 Jan 2024 • Itai Feigenbaum, Devansh Arpit, Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Weiran Yao, Caiming Xiong, Silvio Savarese

Under appropriate assumptions and conditioning, we can separate the sources or sinks from the remainder of the nodes by comparing their conditional entropy to the unconditional entropy of their noise.

Causal Discovery
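
As a toy illustration of the comparison described above, the sketch below estimates H(Y | X) for a discrete chain X -> Y and checks it against the entropy of Y's exogenous noise. This assumes discrete variables and a known noise distribution; the paper's actual layering algorithm and assumptions are more involved.

```python
import numpy as np
from collections import Counter

def entropy(samples):
    # Empirical Shannon entropy (in nats) of a discrete sample.
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def conditional_entropy(x, z):
    # H(X | Z) = H(X, Z) - H(Z), estimated from paired discrete samples.
    return entropy(list(zip(x, z))) - entropy(list(z))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 20_000)          # source node
noise = (rng.random(20_000) < 0.1)      # Bernoulli(0.1) exogenous noise
y = x ^ noise                           # sink node: Y = X XOR noise
# H(Y | X) should be close to H(noise); a non-sink node would not match.
print(conditional_entropy(y, x), entropy(list(noise)))
```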

Editing Arbitrary Propositions in LLMs without Subject Labels

no code implementations • 15 Jan 2024 • Itai Feigenbaum, Devansh Arpit, Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Weiran Yao, Caiming Xiong, Silvio Savarese

On datasets of binary propositions derived from the CounterFact dataset, we show that our method, without access to subject labels, performs close to state-of-the-art L&E methods which have access to subject labels.

Language Modelling Large Language Model +1

DRDT: Dynamic Reflection with Divergent Thinking for LLM-based Sequential Recommendation

no code implementations • 18 Dec 2023 • Yu Wang, Zhiwei Liu, JianGuo Zhang, Weiran Yao, Shelby Heinecke, Philip S. Yu

With our principle, we outperform GPT-Turbo-3.5 on three datasets using 7B models, e.g., Vicuna-7b and Openchat-7b, on NDCG@10.

In-Context Learning Sequential Recommendation
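
For reference, NDCG@10 in the next-item recommendation setting (a single ground-truth item per test case) reduces to the formula sketched below; this is the standard metric, not anything specific to DRDT.

```python
import numpy as np

def ndcg_at_k(ranked_items, target_item, k=10):
    # With one relevant item, the ideal DCG is 1, so NDCG@k is
    # 1 / log2(rank + 1) if the target is in the top-k, else 0.
    topk = list(ranked_items)[:k]
    if target_item not in topk:
        return 0.0
    rank = topk.index(target_item) + 1   # 1-indexed position
    return 1.0 / np.log2(rank + 1)

print(ndcg_at_k(["a", "b", "c", "d"], "c"))  # target ranked 3rd -> 0.5
```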

Enhancing Performance on Seen and Unseen Dialogue Scenarios using Retrieval-Augmented End-to-End Task-Oriented System

no code implementations • 16 Aug 2023 • JianGuo Zhang, Stephen Roller, Kun Qian, Zhiwei Liu, Rui Meng, Shelby Heinecke, Huan Wang, Silvio Savarese, Caiming Xiong

End-to-end task-oriented dialogue (TOD) systems have achieved promising performance by leveraging sophisticated natural language understanding and natural language generation capabilities of pre-trained models.

Natural Language Understanding Retrieval +1
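
A minimal sketch of retrieval augmentation for a task-oriented dialogue model, assuming generic cosine-similarity retrieval over stored exemplar turns; the paper's retriever and prompt construction may differ.

```python
import numpy as np

def retrieve_exemplars(query_vec, exemplar_vecs, exemplar_texts, k=2):
    # Cosine similarity between the current dialogue context and stored exemplars.
    sims = exemplar_vecs @ query_vec / (
        np.linalg.norm(exemplar_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    top = np.argsort(-sims)[:k]
    return [exemplar_texts[i] for i in top]

def build_model_input(dialogue_history, exemplars):
    # Prepend retrieved exemplars as in-context guidance for the TOD model.
    return "\n".join(exemplars) + "\n---\n" + dialogue_history

vecs = np.eye(3)                                   # stand-in exemplar embeddings
texts = ["book a hotel", "find a restaurant", "call a taxi"]
print(build_model_input("user: I need a cab", retrieve_exemplars(vecs[2], vecs, texts)))
```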

Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization

no code implementations • 4 Aug 2023 • Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, JianGuo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese

This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, is promising and can be applied to optimize other models in the agent architecture to enhance agent performance over time.

Language Modelling
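
The sketch below shows the policy-gradient ingredient in isolation: a REINFORCE update on a softmax policy over a few candidate "reflection" choices, with a running-average baseline. The reward function and action space are stand-ins, not Retroformer's actual retrospective model or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)          # logits over 3 hypothetical reflection styles
lr, baseline = 0.5, 0.0

def episode_return(action):
    # Stand-in for the downstream agent's return after using this reflection.
    return [0.2, 0.9, 0.4][action] + rng.normal(scale=0.05)

for _ in range(200):
    probs = np.exp(theta - theta.max()); probs /= probs.sum()
    a = rng.choice(3, p=probs)
    r = episode_return(a)
    baseline += 0.05 * (r - baseline)    # running-average baseline
    grad = -probs                        # d/d_theta log softmax(theta)[a] ...
    grad[a] += 1.0                       # ... is onehot(a) - probs
    theta += lr * (r - baseline) * grad  # REINFORCE update

print(probs.round(2))  # mass should shift toward action 1, the highest-reward choice
```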

DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI

1 code implementation • 19 Jul 2023 • JianGuo Zhang, Kun Qian, Zhiwei Liu, Shelby Heinecke, Rui Meng, Ye Liu, Zhou Yu, Huan Wang, Silvio Savarese, Caiming Xiong

Despite advancements in conversational AI, language models encounter challenges in handling diverse conversational tasks, and existing dialogue dataset collections often lack diversity and comprehensiveness.

Few-Shot Learning Language Modelling +1

Zero-shot Item-based Recommendation via Multi-task Product Knowledge Graph Pre-Training

no code implementations • 12 May 2023 • Ziwei Fan, Zhiwei Liu, Shelby Heinecke, JianGuo Zhang, Huan Wang, Caiming Xiong, Philip S. Yu

This paper presents a novel paradigm for the Zero-Shot Item-based Recommendation (ZSIR) task, which pre-trains a model on a product knowledge graph (PKG) to refine the item features from PLMs.

Recommendation Systems
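
A toy sketch of refining PLM item features with a product knowledge graph: one hop of neighbour averaging over PKG edges. This is a generic stand-in for the paper's multi-task PKG pre-training, with hypothetical shapes and edge lists.

```python
import numpy as np

def refine_item_features(plm_emb, pkg_neighbours, alpha=0.5):
    # Blend each item's PLM embedding with the mean embedding of its
    # PKG neighbours (one message-passing hop).
    refined = plm_emb.copy()
    for item, neighbours in pkg_neighbours.items():
        if neighbours:
            refined[item] = (1 - alpha) * plm_emb[item] + alpha * plm_emb[neighbours].mean(axis=0)
    return refined

emb = np.random.default_rng(0).normal(size=(4, 8))          # 4 items, 8-dim PLM features
print(refine_item_features(emb, {0: [1, 2], 3: []}).shape)  # (4, 8)
```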

Towards More Robust and Accurate Sequential Recommendation with Cascade-guided Adversarial Training

no code implementations • 11 Apr 2023 • Juntao Tan, Shelby Heinecke, Zhiwei Liu, Yongjun Chen, Yongfeng Zhang, Huan Wang

Two properties unique to sequential recommendation models may impair their robustness: the cascade effects induced during training, and the model's tendency to rely too heavily on temporal information.

Sequential Recommendation

On the Unlikelihood of D-Separation

no code implementations • 10 Mar 2023 • Itai Feigenbaum, Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Weiran Yao, Caiming Xiong, Devansh Arpit

We then provide an analytic average-case analysis of the PC Algorithm for causal discovery, as well as of a variant of the SGS Algorithm that we call UniformSGS.

Causal Discovery

Tackling Data Heterogeneity in Federated Learning with Class Prototypes

1 code implementation • 6 Dec 2022 • Yutong Dai, Zeyuan Chen, Junnan Li, Shelby Heinecke, Lichao Sun, Ran Xu

We propose FedNH, a novel method that improves the local models' performance for both personalization and generalization by combining the uniformity and semantics of class prototypes.

Personalized Federated Learning
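
One way to get uniformly spread class prototypes, sketched below, is to take orthonormal directions on the unit hypersphere (well separated whenever the number of classes is at most the feature dimension). This is an illustrative assumption; the exact FedNH initialisation and smoothed prototype updates are described in the paper and its code.

```python
import numpy as np

def init_uniform_prototypes(num_classes, dim, seed=0):
    # Orthonormal class prototypes on the unit hypersphere (requires num_classes <= dim).
    g = np.random.default_rng(seed).normal(size=(dim, num_classes))
    q, _ = np.linalg.qr(g)          # columns are orthonormal
    return q.T                      # shape (num_classes, dim), unit-norm rows

protos = init_uniform_prototypes(10, 64)
print(np.round(protos @ protos.T, 2))   # ~identity matrix: mutually orthogonal unit prototypes
```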

Dynamic Causal Collaborative Filtering

1 code implementation • 23 Aug 2022 • Shuyuan Xu, Juntao Tan, Zuohui Fu, Jianchao Ji, Shelby Heinecke, Yongfeng Zhang

As a result, it is important to incorporate loops into the causal graphs to accurately model the dynamic and iterative data generation process for recommender systems.

Collaborative Filtering counterfactual +2

RGRecSys: A Toolkit for Robustness Evaluation of Recommender Systems

1 code implementation • 12 Jan 2022 • Zohreh Ovaisi, Shelby Heinecke, Jia Li, Yongfeng Zhang, Elena Zheleva, Caiming Xiong

Robust machine learning is an increasingly important topic that focuses on developing models resilient to various forms of imperfect data.

Recommendation Systems

Deconfounded Causal Collaborative Filtering

1 code implementation • 14 Oct 2021 • Shuyuan Xu, Juntao Tan, Shelby Heinecke, Jia Li, Yongfeng Zhang

Experiments on real-world datasets show that our method is able to deconfound unobserved confounders to achieve better recommendation performance.

Collaborative Filtering Recommendation Systems

Communication-Aware Collaborative Learning

no code implementations • 19 Dec 2020 • Avrim Blum, Shelby Heinecke, Lev Reyzin

In this paper, we study collaborative PAC learning with the goal of reducing communication cost at essentially no penalty to the sample complexity.

Classification General Classification +1
