Search Results for author: Changjiang Gao

Found 3 papers, 2 papers with code

Multilingual Pretraining and Instruction Tuning Improve Cross-Lingual Knowledge Alignment, But Only Shallowly

1 code implementation • 6 Apr 2024 • Changjiang Gao, Hongda Hu, Peng Hu, Jiajun Chen, Jixing Li, ShuJian Huang

In this paper, we propose CLiKA, a systematic framework to assess the cross-lingual knowledge alignment of LLMs at the Performance, Consistency and Conductivity levels, and explore the effect of multilingual pretraining and instruction tuning on the degree of alignment.

Measuring Meaning Composition in the Human Brain with Composition Scores from Large Language Models

no code implementations • 7 Mar 2024 • Changjiang Gao, Jixing Li, Jiajun Chen, ShuJian Huang

Drawing on the key-value memory interpretation of transformer feed-forward network blocks, we introduce the Composition Score, a novel model-based metric designed to quantify the degree of meaning composition during sentence comprehension.


Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention

1 code implementation • 29 Oct 2023 • Changjiang Gao, ShuJian Huang, Jixing Li, Jiajun Chen

Recent large language models (LLMs) have shown strong abilities in understanding natural language.
