Search Results for author: Kashun Shum

Found 5 papers, 5 papers with code

RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models

1 code implementation • 31 Dec 2023 • Yuanhao Wu, Juno Zhu, Siliang Xu, Kashun Shum, Cheng Niu, Randy Zhong, Juntong Song, Tong Zhang

Retrieval-augmented generation (RAG) has become a main technique for alleviating hallucinations in large language models (LLMs).

Hallucination, Retrieval

Plum: Prompt Learning using Metaheuristic

1 code implementation • 14 Nov 2023 • Rui Pan, Shuo Xing, Shizhe Diao, Wenhe Sun, Xiang Liu, Kashun Shum, Renjie Pi, Jipeng Zhang, Tong Zhang

Since the emergence of large language models, prompt learning has become a popular method for optimizing and customizing these models.

Image Generation

RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment

1 code implementation • 13 Apr 2023 • Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, Tong Zhang

Utilizing a reward model and a sufficient number of samples, our approach selects high-quality samples, discards those that exhibit undesired behavior, and subsequently enhances the model by fine-tuning on the filtered samples.

Ethics
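The reward-ranked filtering step described in the RAFT abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_reward` is a hypothetical stand-in for a trained reward model, and `raft_filter` simply keeps the top-scoring fraction of generated samples, which would then be used for fine-tuning.

```python
def toy_reward(sample: str) -> float:
    """Hypothetical reward model: here we just score by length for illustration."""
    return float(len(sample))

def raft_filter(samples, keep_ratio=0.5, reward=toy_reward):
    """Reward-ranked filtering: rank samples by reward, keep the top fraction."""
    ranked = sorted(samples, key=reward, reverse=True)
    k = max(1, int(len(ranked) * keep_ratio))
    return ranked[:k]

# Candidate generations from the model being aligned.
candidates = ["ok", "a longer answer", "the longest candidate answer", "hi"]

# The two highest-reward samples survive; the rest are discarded.
kept = raft_filter(candidates, keep_ratio=0.5)
```

In the actual method the surviving samples feed a supervised fine-tuning step, and the generate-filter-finetune loop repeats.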

Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data

2 code implementations • 24 Feb 2023 • Kashun Shum, Shizhe Diao, Tong Zhang

However, most CoT studies rely on carefully designed human-annotated rationale chains to prompt LLMs, posing challenges for real-world applications where labeled data is available without rationale chains.

Arithmetic Reasoning, Language Modelling
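One way to augment CoT exemplars without human-annotated rationales, in the spirit of the abstract above, is to generate candidate rationale chains and keep only those whose final answer matches the gold label. This is a simplified sketch, not the paper's full pipeline (which also includes a selection stage); `extract_answer` is a hypothetical helper assuming each chain ends with an "Answer:" marker.

```python
def extract_answer(chain: str) -> str:
    """Hypothetical helper: read the text after the last 'Answer:' marker."""
    return chain.rsplit("Answer:", 1)[-1].strip()

def prune_chains(candidate_chains, gold_label):
    """Keep only generated rationale chains whose final answer matches the label."""
    return [c for c in candidate_chains if extract_answer(c) == gold_label]

# Candidate chains sampled from an LLM for the question "What is 3 + 4?".
chains = [
    "3 plus 4 gives 7. Answer: 7",
    "3 plus 4 gives 8. Answer: 8",
]

# Only the chain consistent with the labeled answer is kept as a pseudo-exemplar.
kept_chains = prune_chains(chains, "7")
```

The surviving pseudo-chains can then serve as CoT prompt exemplars, so the method needs only question-answer labels rather than annotated rationales.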
