Search Results for author: Zhuochun Li

Found 6 papers, 1 paper with code

Mitigating the Risk of Health Inequity Exacerbated by Large Language Models

no code implementations • 7 Oct 2024 • Yuelyu Ji, Wenhe Ma, Sonish Sivarajkumar, Hang Zhang, Eugene Mathew Sadhu, Zhuochun Li, Xizhi Wu, Shyam Visweswaran, Yanshan Wang

Recent advancements in large language models have demonstrated their potential in numerous medical applications, particularly in automating clinical trial matching for translational research and enhancing medical question answering for clinical decision support.

Bias Detection • Question Answering

Learning from Committee: Reasoning Distillation from a Mixture of Teachers with Peer-Review

no code implementations • 4 Oct 2024 • Zhuochun Li, Yuelyu Ji, Rui Meng, Daqing He

While reasoning capabilities typically emerge in large language models (LLMs) with tens of billions of parameters, recent research focuses on improving smaller open-source models through knowledge distillation (KD) from commercial LLMs.

Knowledge Distillation • Logical Reasoning

RAG-RLRC-LaySum at BioLaySumm: Integrating Retrieval-Augmented Generation and Readability Control for Layman Summarization of Biomedical Texts

1 code implementation • 21 May 2024 • Yuelyu Ji, Zhuochun Li, Rui Meng, Sonish Sivarajkumar, Yanshan Wang, Zeshui Yu, Hui Ji, Yushui Han, Hanyu Zeng, Daqing He

This paper introduces the RAG-RLRC-LaySum framework, designed to make complex biomedical research understandable to laymen through advanced Natural Language Processing (NLP) techniques.

RAG • Retrieval
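
This is the one entry above with a public implementation. As a rough illustration only, a minimal sketch of a retrieve-then-generate loop with a readability check is given below; the word-overlap retriever, Flesch-style readability gate, and `call_llm` helper are simplified stand-ins assumed for the example, not the paper's actual components.

```python
# Minimal sketch of retrieval-augmented lay summarization with a readability
# gate, loosely in the spirit of RAG-RLRC-LaySum. `call_llm` is a hypothetical
# placeholder for any instruction-tuned model client.
import re


def retrieve(query: str, passages: list[str], k: int = 3) -> list[str]:
    """Rank passages by simple word overlap with the query (stand-in retriever)."""
    q = set(re.findall(r"\w+", query.lower()))
    scored = sorted(passages, key=lambda p: -len(q & set(re.findall(r"\w+", p.lower()))))
    return scored[:k]


def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease; higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"\w+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)


def call_llm(prompt: str) -> str:
    """Hypothetical generator call; replace with an actual LLM client."""
    raise NotImplementedError


def lay_summarize(abstract: str, corpus: list[str], min_ease: float = 50.0) -> str:
    """Retrieve background passages, generate a lay summary, and simplify it
    until it clears a readability threshold (or the retry cap is reached)."""
    context = "\n".join(retrieve(abstract, corpus))
    prompt = (
        f"Background:\n{context}\n\nAbstract:\n{abstract}\n\n"
        "Explain this study in plain language for a general reader."
    )
    summary = call_llm(prompt)
    for _ in range(3):  # cap rewrites so the loop always terminates
        if flesch_reading_ease(summary) >= min_ease:
            break
        summary = call_llm(f"Rewrite more simply, using short sentences:\n{summary}")
    return summary
```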

Effects of Different Prompts on the Quality of GPT-4 Responses to Dementia Care Questions

no code implementations • 5 Apr 2024 • Zhuochun Li, Bo Xie, Robin Hilsabeck, Alyssa Aguirre, Ning Zou, Zhimeng Luo, Daqing He

Evidence suggests that different prompts lead large language models (LLMs) to generate responses with varying quality.
