Search Results for author: Zheni Zeng

Found 12 papers, 8 papers with code

RankCoT: Refining Knowledge for Retrieval-Augmented Generation through Ranking Chain-of-Thoughts

1 code implementation • 25 Feb 2025 • Mingyan Wu, Zhenghao Liu, Yukun Yan, Xinze Li, Shi Yu, Zheni Zeng, Yu Gu, Ge Yu

Retrieval-Augmented Generation (RAG) enhances the performance of Large Language Models (LLMs) by incorporating external knowledge.

Tasks: RAG, Reranking (+2 more)
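The abstract above refers to the generic retrieve-then-generate pattern that RAG denotes. Below is a minimal sketch of that pattern only, not the paper's RankCoT method; the toy corpus, the word-overlap retriever, and the prompt template are all illustrative stand-ins.

```python
# Minimal retrieve-then-generate (RAG) sketch. The corpus, the naive
# word-overlap scoring, and the prompt layout are illustrative stand-ins,
# not the RankCoT method from the paper above.

CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Python is a widely used programming language.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query (toy retriever)."""
    q_tokens = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_tokens & set(p.lower().split())))
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend the retrieved passages so the LLM can ground its answer."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using the context below.\nContext:\n{context}\nQuestion: {query}"

if __name__ == "__main__":
    question = "Where is the Eiffel Tower located?"
    prompt = build_prompt(question, retrieve(question, CORPUS))
    print(prompt)  # this prompt would then be sent to any LLM of choice
```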

KBAlign: Efficient Self Adaptation on Specific Knowledge Bases

1 code implementation • 22 Nov 2024 • Zheni Zeng, Yuxuan Chen, Shi Yu, Ruobing Wang, Yukun Yan, Zhenghao Liu, Shuo Wang, Xu Han, Zhiyuan Liu, Maosong Sun

Although retrieval-augmented generation (RAG) remains essential for knowledge-based question answering (KBQA), current paradigms face critical challenges in specific domains.

Tasks: Question Answering, RAG (+2 more)

RAG-DDR: Optimizing Retrieval-Augmented Generation Using Differentiable Data Rewards

1 code implementation • 17 Oct 2024 • Xinze Li, Sen Mei, Zhenghao Liu, Yukun Yan, Shuo Wang, Shi Yu, Zheni Zeng, Hao Chen, Ge Yu, Zhiyuan Liu, Maosong Sun, Chenyan Xiong

Our experiments on various knowledge-intensive tasks demonstrate that DDR significantly outperforms the SFT method, particularly for LLMs with smaller-scale parameters that depend more on the retrieved knowledge.

Tasks: RAG, Retrieval (+1 more)

PersLLM: A Personified Training Approach for Large Language Models

1 code implementation • 17 Jul 2024 • Zheni Zeng, Jiayi Chen, Huimin Chen, Yukun Yan, Yuxuan Chen, Zhenghao Liu, Zhiyuan Liu, Maosong Sun

To fully unlock the potential of LLM personification, we propose PersLLM, a framework for better data construction and model tuning.

Tasks: Prompt Engineering, Specificity

ActiveRAG: Autonomously Knowledge Assimilation and Accommodation through Retrieval-Augmented Agents

1 code implementation • 21 Feb 2024 • Zhipeng Xu, Zhenghao Liu, Yukun Yan, Shuo Wang, Shi Yu, Zheni Zeng, Chaojun Xiao, Zhiyuan Liu, Ge Yu, Chenyan Xiong

Retrieval-Augmented Generation (RAG) enables Large Language Models (LLMs) to leverage external knowledge, enhancing their performance on knowledge-intensive tasks.

Tasks: Active Learning, Position (+4 more)

ConPET: Continual Parameter-Efficient Tuning for Large Language Models

1 code implementation • 26 Sep 2023 • Chenyang Song, Xu Han, Zheni Zeng, Kuai Li, Chen Chen, Zhiyuan Liu, Maosong Sun, Tao Yang

First, Static ConPET adapts existing continual learning methods, originally designed for relatively smaller models, to LLMs through PET and a dynamic replay strategy, which largely reduces tuning costs and alleviates the over-fitting and forgetting issues.

Tasks: Continual Learning
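The snippet above names two ingredients: parameter-efficient tuning (PET) and a dynamic replay strategy. The sketch below illustrates those two ideas only, under assumptions of my own: a frozen backbone with a small trainable adapter, and a replay buffer mixed into each new task's batches. The tiny architecture, hyperparameters, and buffer policy are illustrative, not the paper's configuration.

```python
# PET + replay sketch: freeze the backbone, train only a small adapter,
# and replay stored old-task examples while learning a new task.
# All sizes and the buffer policy are illustrative assumptions.
import random
import torch
import torch.nn as nn

class AdapterModel(nn.Module):
    def __init__(self, backbone: nn.Module, dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # PET: backbone stays frozen
        self.adapter = nn.Sequential(        # only these weights are tuned
            nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, num_classes)
        )

    def forward(self, x):
        return self.adapter(self.backbone(x))

def train_task(model, task_data, replay_buffer, steps=100):
    """Train on a new task while replaying old examples to curb forgetting.

    task_data: list of (feature_tensor, label) pairs for the new task.
    replay_buffer: list of stored pairs from earlier tasks (updated in place).
    """
    opt = torch.optim.Adam(model.adapter.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        batch = random.sample(task_data, k=4)
        if replay_buffer:                    # dynamic replay: mix in old data
            batch += random.sample(replay_buffer, k=min(4, len(replay_buffer)))
        x = torch.stack([b[0] for b in batch])
        y = torch.tensor([b[1] for b in batch])
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # keep a small sample of this task for replay during future tasks
    replay_buffer.extend(random.sample(task_data, k=min(8, len(task_data))))
```

Because only `model.adapter` receives gradients, each new task touches a tiny fraction of the parameters, which is what keeps the tuning cost low in this style of setup.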

Interactive Molecular Discovery with Natural Language

1 code implementation • 21 Jun 2023 • Zheni Zeng, Bangchen Yin, Shipeng Wang, Jiarui Liu, Cheng Yang, Haishen Yao, Xingzhi Sun, Maosong Sun, Guotong Xie, Zhiyuan Liu

Natural language is expected to be a key medium for various human-machine interactions in the era of large language models.

Tasks: Property Prediction

Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect

no code implementations • 19 Sep 2020 • Zheni Zeng, Chaojun Xiao, Yuan Yao, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

Recommender systems aim to provide item recommendations for users and are usually faced with the data sparsity problem (e.g., cold start) in real-world scenarios.

Tasks: Recommendation Systems, Transfer Learning

SQE: a Self Quality Evaluation Metric for Parameters Optimization in Multi-Object Tracking

no code implementations • CVPR 2020 • Yanru Huang, Feiyu Zhu, Zheni Zeng, Xi Qiu, Yuan Shen, Jia-Nan Wu

We present SQE, a novel self-quality evaluation metric for parameter optimization in the challenging yet critical multi-object tracking task.

Tasks: Multi-Object Tracking
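The abstract implies a usage pattern: sweep tracker parameters and keep the configuration that scores best under a quality metric computed without ground truth. The sketch below shows that pattern only; `run_tracker` and the fragmentation-based score are hypothetical stand-ins, not the SQE metric defined in the paper.

```python
# Parameter tuning driven by a self-evaluation score (no ground truth).
# `run_tracker`, its parameters, and the toy score are assumptions made
# for illustration; they are not the paper's SQE metric.
from itertools import product

def self_quality_score(tracks: list[list[tuple]]) -> float:
    """Toy proxy: prefer fewer, longer tracks (less fragmentation)."""
    if not tracks:
        return 0.0
    avg_len = sum(len(t) for t in tracks) / len(tracks)
    return avg_len / (1 + len(tracks))

def tune(run_tracker, detections, iou_grid=(0.3, 0.5, 0.7), age_grid=(10, 30)):
    """Grid-search tracker parameters, ranked by the self-evaluation score."""
    best = max(
        product(iou_grid, age_grid),
        key=lambda cfg: self_quality_score(
            run_tracker(detections, iou_thresh=cfg[0], max_age=cfg[1])
        ),
    )
    return {"iou_thresh": best[0], "max_age": best[1]}
```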
