Search Results for author: Jiaang Li

Found 9 papers, 4 papers with code

Does Instruction Tuning Make LLMs More Consistent?

no code implementations • 23 Apr 2024 • Constanza Fierro, Jiaang Li, Anders Søgaard

The purpose of instruction tuning is to enable zero-shot performance, but instruction tuning has also been shown to improve chain-of-thought reasoning and value alignment (Si et al., 2023).

Llama

Word Order's Impacts: Insights from Reordering and Generation Analysis

no code implementations • 18 Mar 2024 • Qinghua Zhao, Jiaang Li, Lei Li, Zenghui Zhou, Junfeng Liu

Existing work has studied the impact of word order within natural text.

Exploring Visual Culture Awareness in GPT-4V: A Comprehensive Probing

no code implementations • 8 Feb 2024 • Yong Cao, Wenyan Li, Jiaang Li, Yifei Yuan, Antonia Karamolegkou, Daniel Hershcovich

Large pretrained vision-language models have drawn considerable interest in recent years due to their remarkable performance.

Image Captioning, TAG

Random Entity Quantization for Parameter-Efficient Compositional Knowledge Graph Representation

1 code implementation • 24 Oct 2023 • Jiaang Li, Quan Wang, Yi Liu, Licheng Zhang, Zhendong Mao

We analyze this phenomenon and reveal that entity codes, the quantization outcomes used to express entities, have higher entropy at the code level and higher Jaccard distance at the codeword level under random entity quantization.

Knowledge Graphs, Quantization
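The abstract's two diagnostics are straightforward to sketch. Below is a minimal, hypothetical illustration (not the paper's released code; the entity, codebook, and code sizes are assumptions) of measuring codeword-usage entropy and pairwise Jaccard distance when entity codes are drawn at random:

```python
# Hedged sketch of the two diagnostics named in the abstract; all sizes
# below are illustrative assumptions, not the paper's settings.
import random
from collections import Counter
from math import log2

NUM_ENTITIES = 1000   # assumed number of KG entities
CODEBOOK_SIZE = 64    # assumed number of codewords
CODE_LENGTH = 8       # assumed codewords per entity code

# Random entity quantization: each entity gets a random set of codewords.
random.seed(0)
codes = [frozenset(random.sample(range(CODEBOOK_SIZE), CODE_LENGTH))
         for _ in range(NUM_ENTITIES)]

# Code-level entropy: how evenly codewords are used across entity codes.
counts = Counter(cw for code in codes for cw in code)
total = sum(counts.values())
entropy = -sum((c / total) * log2(c / total) for c in counts.values())

# Codeword-level Jaccard distance: how distinguishable two entity codes are.
def jaccard_distance(a, b):
    return 1 - len(a & b) / len(a | b)

n = 100  # sample of entities to keep the pairwise loop small
avg_dist = sum(jaccard_distance(codes[i], codes[j])
               for i in range(n) for j in range(i + 1, n)) / (n * (n - 1) / 2)

print(f"codeword entropy: {entropy:.2f} bits (max {log2(CODEBOOK_SIZE):.2f})")
print(f"mean pairwise Jaccard distance: {avg_dist:.3f}")
```

Higher values on both measures mean entities remain easy to tell apart, which is the property the abstract attributes to random quantization.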

Copyright Violations and Large Language Models

1 code implementation • 20 Oct 2023 • Antonia Karamolegkou, Jiaang Li, Li Zhou, Anders Søgaard

Language models may memorize more than just facts, including entire chunks of texts seen during training.

Memorization
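One common way to operationalize such a memorization check, sketched below under the assumption that we already have a model continuation (the paper's exact protocol may differ), is to measure the longest verbatim overlap between generated text and the source text:

```python
# Hedged sketch: quantify verbatim memorization as the longest common
# substring between a model continuation and the true continuation.
from difflib import SequenceMatcher

def longest_common_substring_len(generated: str, reference: str) -> int:
    """Length (in characters) of the longest verbatim overlap."""
    match = SequenceMatcher(None, generated, reference).find_longest_match(
        0, len(generated), 0, len(reference))
    return match.size

# Illustrative strings; in practice `generated` comes from an LLM completion.
reference = "It was the best of times, it was the worst of times, ..."
generated = "it was the worst of times, it was the age of wisdom"
print(longest_common_substring_len(generated, reference))
```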

Structural Similarities Between Language Models and Neural Response Measurements

1 code implementation • 2 Jun 2023 • Jiaang Li, Antonia Karamolegkou, Yova Kementchedjhieva, Mostafa Abdou, Sune Lehmann, Anders Søgaard

Like large language models, human language processing is opaque, but neural response measurements provide (noisy) recordings of activation during listening or reading, from which similar representations of words and phrases can be extracted.

Brain Decoding
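A standard way to compare two such representation spaces, shown here as a hedged sketch with random stand-in matrices rather than real LM activations or neural recordings, is second-order (representational) similarity: correlate the pairwise word similarities computed in each space:

```python
# Hedged sketch of representational similarity analysis between an LM space
# and a brain-derived space; both matrices are random stand-ins.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
lm_vecs = rng.normal(size=(50, 768))     # 50 words x LM hidden size (assumed)
brain_vecs = rng.normal(size=(50, 200))  # same 50 words x voxel features (assumed)

def pairwise_sims(X):
    # Cosine similarity between every pair of word vectors.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = Xn @ Xn.T
    return sims[np.triu_indices_from(sims, k=1)]  # upper triangle, no diagonal

rho, _ = spearmanr(pairwise_sims(lm_vecs), pairwise_sims(brain_vecs))
print(f"second-order similarity (Spearman rho): {rho:.3f}")
```

With real, word-aligned data, a reliably positive correlation would indicate shared similarity structure between the two spaces; random stand-ins hover near zero.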

Implications of the Convergence of Language and Vision Model Geometries

no code implementations • 13 Feb 2023 • Jiaang Li, Yova Kementchedjhieva, Anders Søgaard

Large-scale pretrained language models (LMs) are said to "lack the ability to connect [their] utterances to the world" (Bender and Koller, 2020).
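One way to test whether language and vision geometries converge, sketched below with random stand-ins for real embeddings (the paper's own analysis may differ), is to fit an orthogonal Procrustes map from the language space to the vision space and check nearest-neighbor retrieval:

```python
# Hedged sketch: align LM concept embeddings to vision-model embeddings with
# an orthogonal Procrustes map; all arrays are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
lm = rng.normal(size=(100, 512))      # 100 concepts in LM space (assumed dims)
vision = rng.normal(size=(100, 512))  # same concepts in vision space

# Orthogonal Procrustes: W = argmin ||lm @ W - vision||_F s.t. W orthogonal.
U, _, Vt = np.linalg.svd(lm.T @ vision)
W = U @ Vt

mapped = lm @ W
# Precision@1: does each mapped LM vector land nearest its vision counterpart?
dists = np.linalg.norm(mapped[:, None, :] - vision[None, :, :], axis=-1)
p_at_1 = float(np.mean(dists.argmin(axis=1) == np.arange(100)))
print(f"precision@1 after Procrustes alignment: {p_at_1:.2f}")
```

With real, semantically aligned embeddings, precision@1 well above chance would suggest the two model geometries share structure; the random stand-ins here stay at chance level.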
