Search Results for author: Jiayuan Su

Found 2 papers, 0 papers with code

Dialectical Alignment: Resolving the Tension of 3H and Security Threats of LLMs

no code implementations • 30 Mar 2024 • Shu Yang, Jiayuan Su, Han Jiang, Mengdi Li, Keyuan Cheng, Muhammad Asif Ali, Lijie Hu, Di Wang

With the rise of large language models (LLMs), ensuring that they embody the principles of being helpful, honest, and harmless (3H), known as Human Alignment, has become crucial.

Tasks: knowledge editing, Navigate, +1

API Is Enough: Conformal Prediction for Large Language Models Without Logit-Access

no code implementations • 2 Mar 2024 • Jiayuan Su, Jing Luo, Hongwei Wang, Lu Cheng

This study aims to address the pervasive challenge of quantifying uncertainty in large language models (LLMs) without logit access.

Tasks: Conformal Prediction, Open-Ended Question Answering, +2
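
For context, the sketch below shows how split conformal prediction can be run using only sampled outputs from a black-box API, with sampling frequency standing in as the nonconformity score. This is a minimal illustration of the general technique, not the scoring functions proposed in the paper; `sample_llm` and `calibration_data` are hypothetical names.

```python
import math
from collections import Counter

def frequency_score(samples, candidate):
    """Nonconformity score from sampling frequency alone (no logits needed):
    1 minus the fraction of sampled responses that match the candidate."""
    counts = Counter(samples)
    return 1.0 - counts[candidate] / len(samples)

def calibrate_threshold(calib_scores, alpha=0.1):
    """Split-conformal threshold: the ceil((n + 1) * (1 - alpha)) / n empirical
    quantile of the calibration nonconformity scores."""
    n = len(calib_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    if k > n:  # too few calibration points for this alpha: include everything
        return float("inf")
    return sorted(calib_scores)[k - 1]

def prediction_set(samples, candidates, threshold):
    """All candidates whose nonconformity score falls within the threshold;
    on average the set covers the true answer with probability >= 1 - alpha."""
    return [c for c in candidates if frequency_score(samples, c) <= threshold]

# Hypothetical usage, assuming `sample_llm(prompt, m)` returns m sampled answers
# from a black-box API and `calibration_data` holds (samples, true_answer) pairs:
#   calib_scores = [frequency_score(s, y) for s, y in calibration_data]
#   tau = calibrate_threshold(calib_scores, alpha=0.1)
#   answers = sample_llm("Q: ...", 20)
#   pred_set = prediction_set(answers, set(answers), tau)
```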
