Search Results for author: Zelong Li

Found 14 papers, 9 papers with code

Towards LLM-RecSys Alignment with Textual ID Learning

1 code implementation • 27 Mar 2024 • Juntao Tan, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Zelong Li, Yongfeng Zhang

The results show that the zero-shot performance of the pre-trained foundation model is comparable to, or even better than, some traditional recommendation models trained with supervision, demonstrating the potential of the IDGen paradigm to serve as the foundation model for generative recommendation.

Sequential Recommendation, Text Generation
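
As an illustration of the textual-ID idea above, here is a minimal sketch (not from the paper's code; the item strings and the generation stub are invented) of formatting a user's history as plain-text item identifiers and asking a language model to generate the next item ID as text.

```python
# Illustrative sketch of generative recommendation with textual item IDs.
# The model call is a placeholder; IDGen in the paper learns the IDs, while
# here human-readable strings simply show the input/output format.

def build_prompt(history):
    """Serialize a user's interaction history into a text prompt."""
    items = " ; ".join(history)
    return f"User has interacted with: {items}. Predict the next item ID:"

def generate_next_item(prompt):
    # Placeholder for a seq2seq / decoder-only LM call, e.g. model.generate(...)
    return "<textual-id-of-predicted-item>"

history = ["blue_running_shoes_2021", "trail_socks_pack", "hydration_belt"]
prompt = build_prompt(history)
print(prompt)
print(generate_next_item(prompt))
```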

AIOS: LLM Agent Operating System

2 code implementations • 25 Mar 2024 • Kai Mei, Zelong Li, Shuyuan Xu, Ruosong Ye, Yingqiang Ge, Yongfeng Zhang

Inspired by these challenges, this paper presents AIOS, an LLM agent operating system, which embeds large language models into the operating system (OS) as the brain of the OS, enabling an operating system "with soul" -- an important step towards AGI.

Language Modelling, Large Language Model +1
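
A rough sketch of the "LLM as OS kernel" picture: agent requests are queued and scheduled before being dispatched to a shared LLM, much like processes competing for a CPU. The toy FIFO scheduler below is illustrative only, not the AIOS implementation.

```python
# Toy illustration of scheduling agent requests to a shared LLM "kernel".
# AIOS's actual scheduler, memory and tool managers are more elaborate;
# this only shows the queue-and-dispatch pattern.
from collections import deque

class LLMKernel:
    def __init__(self):
        self.queue = deque()

    def submit(self, agent_name, request):
        self.queue.append((agent_name, request))

    def run(self):
        while self.queue:
            agent, request = self.queue.popleft()   # FIFO scheduling
            response = self._call_llm(request)      # shared LLM resource
            print(f"[{agent}] {request!r} -> {response!r}")

    def _call_llm(self, request):
        return f"(LLM response to: {request})"      # placeholder

kernel = LLMKernel()
kernel.submit("travel_agent", "plan a 3-day trip")
kernel.submit("math_agent", "integrate x^2")
kernel.run()
```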

TrustAgent: Towards Safe and Trustworthy LLM-based Agents through Agent Constitution

1 code implementation • 2 Feb 2024 • Wenyue Hua, Xianjun Yang, Zelong Li, Wei Cheng, Yongfeng Zhang

This paper presents TrustAgent, an Agent-Constitution-based agent framework, as an initial investigation into improving the safety dimension of trustworthiness in LLM-based agents.
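
A minimal sketch of constitution-style checking, assuming a keyword blocklist stands in for the constitution: each proposed action is screened against a small set of safety rules before execution. The rules are invented for illustration; TrustAgent's pre-, in-, and post-planning strategies are more involved.

```python
# Illustrative pre-execution safety screen inspired by the agent-constitution idea.
# The "constitution" here is a hand-written keyword blocklist, purely for demonstration.

CONSTITUTION = [
    ("delete", "Agent must not destroy user data without confirmation."),
    ("transfer funds", "Agent must not move money autonomously."),
]

def screen_action(action: str):
    for keyword, rule in CONSTITUTION:
        if keyword in action.lower():
            return False, rule
    return True, "ok"

for action in ["summarize the report", "delete all backups"]:
    allowed, reason = screen_action(action)
    print(f"{action!r}: {'ALLOWED' if allowed else 'BLOCKED'} ({reason})")
```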

Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents

1 code implementation • 1 Feb 2024 • Zelong Li, Wenyue Hua, Hao Wang, He Zhu, Yongfeng Zhang

A stack-based LLM plan generation process is then conducted under the supervision of the automaton to ensure that the generated plan satisfies the constraints, making the planning process controllable.
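
The constrained-generation idea can be pictured with a small stack-supervised checker: at each step the planner may only pick actions the current stack state allows, so the finished plan is valid by construction. The toy grammar and the random action picker below are stand-ins for the paper's formal-language specification and LLM.

```python
# Toy stack-supervised plan generation: only actions permitted by the current
# stack state can be chosen, so the final plan always satisfies the constraints.
# The "LLM" is replaced by random choice for the sake of a runnable example.
import random

def allowed_actions(stack):
    if stack and stack[-1] == "NEED_DATA":
        return ["load_dataset"]          # must resolve the open requirement first
    return ["train_model", "request_data", "finish"]

def step(stack, action):
    if action == "request_data":
        stack.append("NEED_DATA")
    elif action == "load_dataset" and stack:
        stack.pop()
    return stack

stack, plan = [], []
for _ in range(50):                                   # cap the plan length
    action = random.choice(allowed_actions(stack))    # "LLM" choice, constrained
    plan.append(action)
    stack = step(stack, action)
    if action == "finish" and not stack:
        break
print(plan)
```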

PAP-REC: Personalized Automatic Prompt for Recommendation Language Model

1 code implementation • 1 Feb 2024 • Zelong Li, Jianchao Ji, Yingqiang Ge, Wenyue Hua, Yongfeng Zhang

In this paper, we propose PAP-REC, a framework that generates Personalized Automatic Prompts for RECommendation language models to mitigate the inefficiency and ineffectiveness of manually designed prompts.

Language Modelling
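
One way to picture automatic prompt search is a simple evaluate-and-keep-the-best loop over candidate prompt templates scored by a recommendation metric on validation data. The candidate templates and scoring stub below are illustrative only; PAP-REC searches personalized token-level prompts rather than whole templates.

```python
# Illustrative search over candidate prompt templates for a recommendation LM.
# This sketch only shows the outer loop of "generate candidates, score on
# validation data, keep the best".
import random

CANDIDATES = [
    "Given the history {history}, which item will user {user} like next?",
    "User {user} bought {history}. Recommend the next item:",
    "{user} | {history} -> next item?",
]

def validation_score(template):
    # Placeholder: in practice, fill the template for validation users,
    # run the recommendation LM, and compute e.g. hit-rate@10.
    return random.random()

best = max(CANDIDATES, key=validation_score)
print("selected prompt template:", best)
```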

GenRec: Large Language Model for Generative Recommendation

1 code implementation • 2 Jul 2023 • Jianchao Ji, Zelong Li, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Juntao Tan, Yongfeng Zhang

Subsequently, we use these prompts to fine-tune the LLaMA backbone LLM on a dataset of user-item interactions, represented by textual data, to capture user preferences and item characteristics.

Language Modelling, Large Language Model +1
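
The fine-tuning data described above can be pictured as instruction-style records built from each user's interaction sequence: the history (as text) is the input and the next interacted item is the target. The field names and example items below are invented; GenRec's actual prompt formats differ.

```python
# Sketch of turning user-item interaction sequences into instruction-tuning
# records (input = textual history, target = next item) for a LLaMA-style LM.

def make_training_records(user_histories, min_len=2):
    records = []
    for user, items in user_histories.items():
        for t in range(min_len, len(items) + 1):
            history, target = items[: t - 1], items[t - 1]
            records.append({
                "instruction": "Predict the next item this user will interact with.",
                "input": f"User {user} history: " + ", ".join(history),
                "output": target,
            })
    return records

histories = {"u1": ["wireless mouse", "usb-c hub", "laptop stand"]}
for rec in make_training_records(histories):
    print(rec)
```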

Counterfactual Collaborative Reasoning

no code implementations • 30 Jun 2023 • Jianchao Ji, Zelong Li, Shuyuan Xu, Max Xiong, Juntao Tan, Yingqiang Ge, Hao Wang, Yongfeng Zhang

In this paper, we explore how the two reasoning abilities can be jointly modeled to enhance both accuracy and explainability of machine learning models.

Counterfactual, Counterfactual Reasoning +3

OpenAGI: When LLM Meets Domain Experts

1 code implementation • NeurIPS 2023 • Yingqiang Ge, Wenyue Hua, Kai Mei, Jianchao Ji, Juntao Tan, Shuyuan Xu, Zelong Li, Yongfeng Zhang

This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents, enabling them to harness expert models for complex task-solving towards Artificial General Intelligence (AGI).

Benchmarking, Natural Language Queries
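
The core loop of harnessing expert models can be sketched as: an LLM proposes a sequence of model names for a task, and a controller runs them in order, feeding each output to the next. The planner stub and the model registry below are placeholders and do not reflect OpenAGI's benchmark or its RLTF training.

```python
# Toy "LLM plans, expert models execute" pipeline. The registry maps model
# names to callables; real expert models (vision, translation, ...) would
# replace these string-processing stand-ins.

REGISTRY = {
    "denoise": lambda x: x.replace("#", ""),
    "translate_to_english": lambda x: x,          # placeholder expert model
    "summarize": lambda x: x[:40] + "...",
}

def llm_plan(task_description):
    # Placeholder for an LLM that maps a task to a chain of expert models.
    return ["denoise", "translate_to_english", "summarize"]

def execute(task_description, data):
    for name in llm_plan(task_description):
        data = REGISTRY[name](data)               # pipe output into the next model
    return data

print(execute("clean and summarize this noisy text",
              "##A noisy# document about multi-step task solving with experts.#"))
```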

A Survey on Trustworthy Recommender Systems

no code implementations • 25 Jul 2022 • Yingqiang Ge, Shuchang Liu, Zuohui Fu, Juntao Tan, Zelong Li, Shuyuan Xu, Yunqi Li, Yikun Xian, Yongfeng Zhang

Recommender systems (RS), serving at the forefront of Human-centered AI, are widely deployed in almost every corner of the web and facilitate the human decision-making process.

Decision Making, Explainable Recommendation +2

AutoLossGen: Automatic Loss Function Generation for Recommender Systems

1 code implementation • 27 Apr 2022 • Zelong Li, Jianchao Ji, Yingqiang Ge, Yongfeng Zhang

One challenge for automatic loss generation in recommender systems is the extreme sparsity of recommendation datasets, which leads to the sparse reward problem for loss generation and search.

Recommendation Systems
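
Automatic loss generation can be pictured as drawing candidate loss functions from a pool of primitives, training with each, and keeping whichever scores best on validation data. The primitives and scoring stub below are illustrative and do not reproduce AutoLossGen's learned controller or its handling of the sparse-reward problem.

```python
# Illustrative search over candidate loss functions drawn from primitives.
# This brute-forces a tiny pool just to show the "generate -> evaluate -> keep" idea.
import math, random

PRIMITIVES = {
    "square_error": lambda p, y: (p - y) ** 2,
    "log_loss": lambda p, y: -(y * math.log(p + 1e-8) + (1 - y) * math.log(1 - p + 1e-8)),
    "abs_error": lambda p, y: abs(p - y),
}

def reward(loss_fn):
    # Placeholder: in practice, train a recommender with loss_fn and
    # return a validation metric such as NDCG.
    return random.random()

best_name = max(PRIMITIVES, key=lambda name: reward(PRIMITIVES[name]))
print("selected loss primitive:", best_name)
```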

Explainable Fairness in Recommendation

no code implementations • 24 Apr 2022 • Yingqiang Ge, Juntao Tan, Yan Zhu, Yinglong Xia, Jiebo Luo, Shuchang Liu, Zuohui Fu, Shijie Geng, Zelong Li, Yongfeng Zhang

In this paper, we study the problem of explainable fairness, which helps gain insights into why a system is fair or unfair, and guides the design of fair recommender systems with a more informed and unified methodology.

Counterfactual, Fairness +1
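
One intuitive way to explain (un)fairness, sketched below with a synthetic dataset and model: mask one feature at a time and see how much a group-level disparity measure changes. The features, groups, and scoring function are all made up for illustration and do not reflect the paper's counterfactual-reasoning framework.

```python
# Toy feature-level fairness explanation: measure how masking each feature
# changes the exposure gap between two user groups. Model and data are synthetic.

def score(user):
    # Stand-in "recommender relevance score".
    return 0.6 * user["activity"] + 0.4 * user["price_sensitivity"]

def exposure_gap(users, mask=None):
    def s(u):
        u = dict(u, **({mask: 0.0} if mask else {}))
        return score(u)
    g0 = [s(u) for u in users if u["group"] == 0]
    g1 = [s(u) for u in users if u["group"] == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

users = [
    {"group": 0, "activity": 0.9, "price_sensitivity": 0.2},
    {"group": 0, "activity": 0.8, "price_sensitivity": 0.3},
    {"group": 1, "activity": 0.4, "price_sensitivity": 0.7},
    {"group": 1, "activity": 0.5, "price_sensitivity": 0.6},
]

base = exposure_gap(users)
for feature in ["activity", "price_sensitivity"]:
    print(feature, "contribution ~", round(base - exposure_gap(users, mask=feature), 3))
```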

From Kepler to Newton: Explainable AI for Science

no code implementations • 24 Nov 2021 • Zelong Li, Jianchao Ji, Yongfeng Zhang

We show how computational and data-intensive methodology -- together with experimental and theoretical methodology -- can be seamlessly integrated for scientific research.
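
As a tiny worked example of the data-driven side of this methodology, the sketch below fits a power law to four planets' orbital periods and semi-major axes and recovers an exponent near 1.5, i.e. Kepler's third law (T^2 proportional to a^3). The data are standard textbook values; the fit is a plain least-squares line in log-log space, not the paper's pipeline.

```python
# Recovering Kepler's third law (T^2 ∝ a^3) from data with a log-log fit.
# Semi-major axis a in AU, orbital period T in years (textbook values).
import math

planets = {"Mercury": (0.387, 0.241), "Venus": (0.723, 0.615),
           "Earth": (1.000, 1.000), "Mars": (1.524, 1.881)}

xs = [math.log(a) for a, _ in planets.values()]
ys = [math.log(T) for _, T in planets.values()]

# Least-squares slope of log(T) vs log(a); should be close to 3/2.
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
print(f"fitted exponent: {slope:.3f}  (Kepler's law predicts 1.5)")
```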

Counterfactual Evaluation for Explainable AI

no code implementations • 5 Sep 2021 • Yingqiang Ge, Shuchang Liu, Zelong Li, Shuyuan Xu, Shijie Geng, Yunqi Li, Juntao Tan, Fei Sun, Yongfeng Zhang

While recent years have witnessed the emergence of various explainable methods in machine learning, to what degree the explanations really represent the reasoning process behind the model prediction -- namely, the faithfulness of explanation -- is still an open problem.

Counterfactual, Counterfactual Reasoning
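
A simple counterfactual take on faithfulness, sketched below with a linear toy model: remove the features an explanation highlights and measure how much the prediction moves; a faithful explanation should flag features whose removal changes the output the most. The model, weights, and explanations are synthetic and only illustrate the idea, not the paper's evaluation metric.

```python
# Toy counterfactual faithfulness check: an explanation is more faithful if
# deleting the features it flags causes a larger change in the model output.

WEIGHTS = {"recency": 0.7, "popularity": 0.2, "price": 0.1}

def predict(features):
    return sum(WEIGHTS[k] * v for k, v in features.items())

def faithfulness(features, explanation):
    """Prediction change when the explained features are zeroed out."""
    counterfactual = {k: (0.0 if k in explanation else v) for k, v in features.items()}
    return abs(predict(features) - predict(counterfactual))

x = {"recency": 0.9, "popularity": 0.5, "price": 0.3}
print("explanation {'recency'}:", round(faithfulness(x, {"recency"}), 3))
print("explanation {'price'}  :", round(faithfulness(x, {"price"}), 3))
```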

Efficient Non-Sampling Knowledge Graph Embedding

1 code implementation • 21 Apr 2021 • Zelong Li, Jianchao Ji, Zuohui Fu, Yingqiang Ge, Shuyuan Xu, Chong Chen, Yongfeng Zhang

Experiments on benchmark datasets show that our NS-KGE framework achieves better efficiency and accuracy than traditional negative-sampling-based models, and that the framework is applicable to a large class of knowledge graph embedding models.

Knowledge Graph Embedding
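
The non-sampling idea can be illustrated with a brute-force version of the loss: score a head-relation pair against every entity and apply a squared loss over all of them (target 1 for the observed tail, 0 otherwise), instead of sampling a few negatives. The DistMult-style scorer and random embeddings below are assumptions for the sketch; the paper's contribution is the algebraic reorganization that avoids this naive per-triple cost, which is not shown here.

```python
# Brute-force "no negative sampling" squared loss for a DistMult-style scorer:
# every entity is treated as a candidate tail with target 1 for the true tail
# and 0 for the rest. (NS-KGE makes this efficient; this is the naive form.)
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 100, 5, 16
E = rng.normal(size=(num_entities, dim))   # entity embeddings
R = rng.normal(size=(num_relations, dim))  # relation embeddings

def non_sampling_loss(head, rel, true_tail):
    scores = (E[head] * R[rel]) @ E.T       # score against ALL candidate tails
    targets = np.zeros(num_entities)
    targets[true_tail] = 1.0
    return float(np.mean((scores - targets) ** 2))

print(non_sampling_loss(head=3, rel=1, true_tail=42))
```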
