Search Results for author: Minda Hu

Found 11 papers, 2 papers with code

A Survey of Personalized Large Language Models: Progress and Future Directions

1 code implementation · 17 Feb 2025 · Jiahong Liu, Zexuan Qiu, Zhongyang Li, Quanyu Dai, Jieming Zhu, Minda Hu, Menglin Yang, Irwin King

Large Language Models (LLMs) excel in handling general knowledge tasks, yet they struggle with user-specific personalization, such as understanding individual emotions, writing styles, and preferences.

Emotion Recognition · General Knowledge · +2

NILE: Internal Consistency Alignment in Large Language Models

no code implementations · 21 Dec 2024 · Minda Hu, Qiyuan Zhang, YuFei Wang, Bowei He, Hongru Wang, Jingyan Zhou, Liangyou Li, Yasheng Wang, Chen Ma, Irwin King

However, existing instruction fine-tuning (IFT) datasets often contain knowledge that is inconsistent with the internal knowledge LLMs acquire during pre-training, which can greatly affect the efficacy of IFT.

Purple-teaming LLMs with Adversarial Defender Training

no code implementations · 1 Jul 2024 · Jingyan Zhou, Kun Li, Junan Li, Jiawen Kang, Minda Hu, Xixin Wu, Helen Meng

In PAD, we automatically collect conversational data that cover the vulnerabilities of an LLM around specific safety risks in a self-play manner, where the attacker aims to elicit unsafe responses and the defender generates safe responses to these attacks.

Generative Adversarial Network · Red Teaming
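The attacker/defender self-play described above can be pictured with a minimal sketch: an attacker model probes a specific safety risk, a defender model replies, and each safe reply feeds the next probe. This is only an illustrative outline under assumed interfaces (attacker, defender, and collect_self_play_dialogues are hypothetical names), not the PAD implementation.

```python
# Illustrative attacker/defender self-play data collection (hypothetical
# interfaces; not the PAD code). attacker/defender: prompt str -> reply str.
def collect_self_play_dialogues(attacker, defender, seed_risks, turns=3):
    dialogues = []
    for risk in seed_risks:
        history = []
        attack = attacker(f"Elicit an unsafe response about: {risk}")
        for _ in range(turns):
            defense = defender(attack)      # defender tries to answer safely
            history.append((attack, defense))
            attack = attacker(defense)      # attacker adapts to the last reply
        dialogues.append({"risk": risk, "turns": history})
    return dialogues

# Toy stand-ins just to show the call pattern:
attacker = lambda p: f"[adversarial probe derived from: {p[:30]}...]"
defender = lambda p: "[safe, policy-compliant reply]"
print(collect_self_play_dialogues(attacker, defender, ["unsafe advice"], turns=2))
```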

Mitigating Large Language Model Hallucination with Faithful Finetuning

no code implementations · 17 Jun 2024 · Minda Hu, Bowei He, YuFei Wang, Liangyou Li, Chen Ma, Irwin King

Large language models (LLMs) have demonstrated remarkable performance on various natural language processing tasks.

Hallucination · Language Modeling · +5

SeRTS: Self-Rewarding Tree Search for Biomedical Retrieval-Augmented Generation

no code implementations · 17 Jun 2024 · Minda Hu, Licheng Zong, Hongru Wang, Jingyan Zhou, Jingjing Li, Yichen Gao, Kam-Fai Wong, Yu Li, Irwin King

By combining the reasoning capabilities of LLMs with the effectiveness of tree search, SeRTS boosts the zero-shot performance of retrieving high-quality and informative results for RAG.

Question Answering · RAG · +1
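As a rough, hedged sketch of combining LLM reasoning with tree search for retrieval, the snippet below runs a best-first search over query rewrites and scores each retrieval with a self-assigned reward. Here expand_fn, retrieve_fn, and reward_fn are assumed placeholder callables; this is not the SeRTS algorithm itself.

```python
# Best-first tree search over query rewrites with a self-assigned reward
# (illustrative placeholders; not the SeRTS algorithm).
import heapq

def tree_search_retrieve(question, expand_fn, retrieve_fn, reward_fn,
                         max_nodes=20, branch=3):
    frontier = [(-reward_fn(question, retrieve_fn(question)), question)]
    best_query, best_score = question, -frontier[0][0]
    visited = 0
    while frontier and visited < max_nodes:
        _, query = heapq.heappop(frontier)
        visited += 1
        for child in expand_fn(query, branch):    # e.g. LLM-proposed rewrites
            docs = retrieve_fn(child)
            score = reward_fn(child, docs)        # e.g. LLM self-reward on evidence
            if score > best_score:
                best_query, best_score = child, score
            heapq.heappush(frontier, (-score, child))
    return best_query, retrieve_fn(best_query)

# Toy usage with trivial stand-ins:
expand = lambda q, n: [f"{q} rewrite {i}" for i in range(n)]
retrieve = lambda q: [f"doc about {q}"]
reward = lambda q, docs: len(q)                   # toy reward: prefer longer queries
print(tree_search_retrieve("aspirin dosage", expand, retrieve, reward))
```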

The Integration of Semantic and Structural Knowledge in Knowledge Graph Entity Typing

1 code implementation · 12 Apr 2024 · Muzhi Li, Minda Hu, Irwin King, Ho-fung Leung

The Knowledge Graph Entity Typing (KGET) task aims to predict missing type annotations for entities in knowledge graphs.

Entity Typing · Knowledge Graphs · +1

RL-GPT: Integrating Reinforcement Learning and Code-as-policy

no code implementations · 29 Feb 2024 · Shaoteng Liu, Haoqi Yuan, Minda Hu, Yanwei Li, Yukang Chen, Shu Liu, Zongqing Lu, Jiaya Jia

To seamlessly integrate both modalities, we introduce a two-level hierarchical framework, RL-GPT, comprising a slow agent and a fast agent.

Minecraft · reinforcement-learning · +2
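A minimal sketch of the slow/fast split described above, under assumed interfaces (slow_agent, fast_agent, and rl_train are hypothetical stand-ins rather than RL-GPT's actual components): the slow agent decides which subtasks to script as code and which to learn, the fast agent produces the scripted policies, and the rest are trained with RL.

```python
# Hedged sketch of a slow/fast two-agent split between code-as-policy and RL
# (hypothetical interfaces; not RL-GPT's actual components).
def solve_with_two_agents(task, slow_agent, fast_agent, rl_train):
    plan = slow_agent(task)                 # slow agent: which parts to code vs. learn
    policies = {}
    for sub in plan["code"]:
        policies[sub] = fast_agent(sub)     # fast agent: scripted code-as-policy
    for sub in plan["learn"]:
        policies[sub] = rl_train(sub)       # hard-to-script behaviour: train with RL
    return policies

# Toy stand-ins just to show the call pattern:
slow = lambda t: {"code": ["craft table"], "learn": ["fight mob"]}
fast = lambda s: (lambda obs: f"scripted action for {s}")
rl = lambda s: (lambda obs: f"learned action for {s}")
policies = solve_with_two_agents("survive in Minecraft", slow, fast, rl)
print(policies["craft table"](None), "|", policies["fight mob"](None))
```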

Large Language Models as Source Planner for Personalized Knowledge-grounded Dialogue

no code implementations · 13 Oct 2023 · Hongru Wang, Minda Hu, Yang Deng, Rui Wang, Fei Mi, Weichao Wang, Yasheng Wang, Wai-Chung Kwan, Irwin King, Kam-Fai Wong

Open-domain dialogue systems usually require different sources of knowledge to generate more informative and evidential responses.

Response Generation

TPE: Towards Better Compositional Reasoning over Conceptual Tools with Multi-persona Collaboration

no code implementations · 28 Sep 2023 · Hongru Wang, Huimin Wang, Lingzhi Wang, Minda Hu, Rui Wang, Boyang Xue, Hongyuan Lu, Fei Mi, Kam-Fai Wong

Large language models (LLMs) have demonstrated exceptional performance in planning the use of various functional tools, such as calculators and retrievers, particularly in question-answering tasks.

Question Answering · Response Generation

Momentum Contrastive Pre-training for Question Answering

no code implementations · 12 Dec 2022 · Minda Hu, Muzhi Li, Yasheng Wang, Irwin King

In order to address this problem, we propose a novel Momentum Contrastive pRe-training fOr queStion anSwering (MCROSS) method for extractive QA.

Benchmarking · Contrastive Learning · +3
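For readers unfamiliar with the momentum-contrastive ingredient named in the method above, the sketch below shows the generic MoCo-style momentum update of a key encoder toward a query encoder. It is a standard illustration of momentum contrastive pre-training, not the MCROSS code.

```python
# Generic MoCo-style momentum update of a key encoder toward a query encoder
# (standard momentum contrastive ingredient; not the MCROSS implementation).
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """key params <- m * key params + (1 - m) * query params."""
    for q_param, k_param in zip(query_encoder.parameters(),
                                key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)

# Example: the key encoder drifts slowly toward the query encoder.
query_enc = nn.Linear(8, 8)
key_enc = copy.deepcopy(query_enc)
momentum_update(query_enc, key_enc, m=0.99)
```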
