Search Results for author: Linhao Yu

Found 8 papers, 6 papers with code

Large Language Model Safety: A Holistic Survey

1 code implementation23 Dec 2024 Dan Shi, Tianhao Shen, Yufei Huang, Zhigen Li, Yongqi Leng, Renren Jin, Chuang Liu, Xinwei Wu, Zishan Guo, Linhao Yu, Ling Shi, Bojian Jiang, Deyi Xiong

The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation.

Language Modeling, Language Modelling, +4

Self-Pluralising Culture Alignment for Large Language Models

1 code implementation16 Oct 2024 Shaoyang Xu, Yongqi Leng, Linhao Yu, Deyi Xiong

In this paper, we propose CultureSPA, a Self-Pluralising Culture Alignment framework that allows LLMs to simultaneously align to pluralistic cultures.

Prompt Engineering

CMoralEval: A Moral Evaluation Benchmark for Chinese Large Language Models

1 code implementation19 Aug 2024 Linhao Yu, Yongqi Leng, Yufei Huang, Shang Wu, Haixin Liu, Xinmeng Ji, Jiahui Zhao, Jinwang Song, Tingting Cui, Xiaoqing Cheng, Tao Liu, Deyi Xiong

These help us curate CMoralEval, which encompasses both explicit moral scenarios (14,964 instances) and moral dilemma scenarios (15,424 instances), each drawing instances from different data sources (a hedged loading sketch follows this entry).

Diversity, Language Modeling, +3
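The snippet above describes only the dataset's composition, so the following is a minimal sketch of how one might tally instances by scenario type and source. The file name (cmoraleval.json) and the scenario_type/source fields are assumptions for illustration, not the release format documented by the CMoralEval authors.

```python
import json
from collections import Counter

# Hypothetical file name and schema: the actual CMoralEval release format may differ.
# Each record is assumed to carry a "scenario_type" field (explicit moral scenario
# vs. moral dilemma) and a "source" field naming the originating data source.
with open("cmoraleval.json", encoding="utf-8") as f:
    records = json.load(f)

# Tally instances per scenario type (the paper reports 14,964 explicit moral
# scenarios and 15,424 moral dilemmas) and per (type, source) pair.
by_type = Counter(r["scenario_type"] for r in records)
by_source = Counter((r["scenario_type"], r["source"]) for r in records)

print(by_type)
for (scenario_type, source), count in sorted(by_source.items()):
    print(f"{scenario_type:>16} | {source:<20} | {count}")
```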

LFED: A Literary Fiction Evaluation Dataset for Large Language Models

1 code implementation16 May 2024 Linhao Yu, Qun Liu, Deyi Xiong

The rapid evolution of large language models (LLMs) has ushered in the need for comprehensive assessments of their performance across various dimensions.

OpenEval: Benchmarking Chinese LLMs across Capability, Alignment and Safety

no code implementations18 Mar 2024 Chuang Liu, Linhao Yu, Jiaxuan Li, Renren Jin, Yufei Huang, Ling Shi, Junhui Zhang, Xinmeng Ji, Tingting Cui, Tao Liu, Jinwang Song, Hongying Zan, Sun Li, Deyi Xiong

In addition to these benchmarks, we have implemented a phased public evaluation and benchmark update strategy to ensure that OpenEval keeps pace with the development of Chinese LLMs, and can even provide cutting-edge benchmark datasets to guide that development.

Benchmarking, Mathematical Reasoning

Identifying Multiple Personalities in Large Language Models with External Evaluation

no code implementations22 Feb 2024 Xiaoyang Song, Yuta Adachi, Jessie Feng, Mouwei Lin, Linhao Yu, Frank Li, Akshat Gupta, Gopala Anumanchipalli, Simerjot Kaur

In this paper, we investigate LLM personalities with an alternative measurement method, which we refer to as the external evaluation method: instead of prompting LLMs with multiple-choice questions on a Likert scale, we evaluate LLMs' personalities by analyzing their responses to open-ended situational questions with an external machine learning model (see the sketch after this entry).

Multiple-choice
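The abstract snippet describes a two-stage pipeline: collect free-form answers to situational questions, then score them with an external model rather than Likert-style self-report items. Below is a minimal sketch of that idea, not the authors' implementation: the example questions, the query_llm placeholder, and the use of an off-the-shelf zero-shot classifier as the external judge are all assumptions.

```python
from transformers import pipeline  # external judge model; choice is an assumption

# Hypothetical open-ended situational questions; the paper's actual set is not shown here.
questions = [
    "Your team's project is failing a week before the deadline. What do you do?",
    "A stranger at a party starts a conversation with you. How do you respond?",
]

def query_llm(prompt: str) -> str:
    # Placeholder for the model under evaluation; substitute a real API call
    # or local generation. Returns a canned response so the sketch runs end to end.
    return "I would calmly regroup the team, cut scope, and ask for help early."

# An external zero-shot classifier acts as the personality judge, scoring each
# open-ended response against Big Five trait labels instead of Likert items.
judge = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
traits = ["extraversion", "agreeableness", "conscientiousness",
          "neuroticism", "openness"]

for q in questions:
    response = query_llm(q)
    scores = judge(response, candidate_labels=traits, multi_label=True)
    print(q)
    print(dict(zip(scores["labels"], [round(s, 3) for s in scores["scores"]])))
```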

Evaluating Large Language Models: A Comprehensive Survey

1 code implementation30 Oct 2023 Zishan Guo, Renren Jin, Chuang Liu, Yufei Huang, Dan Shi, Supryadi, Linhao Yu, Yan Liu, Jiaxuan Li, Bojian Xiong, Deyi Xiong

We hope that this comprehensive overview will stimulate further research interests in the evaluation of LLMs, with the ultimate goal of making evaluation serve as a cornerstone in guiding the responsible development of LLMs.

Survey
