Search Results for author: Jinliang Lu

Found 8 papers, 5 papers with code

Merge, Ensemble, and Cooperate! A Survey on Collaborative Strategies in the Era of Large Language Models

no code implementations • 8 Jul 2024 • Jinliang Lu, Ziliang Pang, Min Xiao, Yaochen Zhu, Rui Xia, Jiajun Zhang

The remarkable success of Large Language Models (LLMs) has ushered natural language processing (NLP) research into a new era.

Diver: Large Language Model Decoding with Span-Level Mutual Information Verification

no code implementations • 4 Jun 2024 • Jinliang Lu, Chen Wang, Jiajun Zhang

Large language models (LLMs) have shown impressive capabilities in adapting to various tasks when provided with task-specific instructions.

Language Modeling • Language Modelling +1

X-Instruction: Aligning Language Model in Low-resource Languages with Self-curated Cross-lingual Instructions

1 code implementation • 30 May 2024 • Chong Li, Wen Yang, Jiajun Zhang, Jinliang Lu, Shaonan Wang, Chengqing Zong

In addition, we find that models tuned on cross-lingual instruction following samples can follow the instruction in the output language without further tuning.

Instruction Following • Language Modeling +2

Bridging the Gap between Different Vocabularies for LLM Ensemble

1 code implementation • 15 Apr 2024 • Yangyifan Xu, Jinliang Lu, Jiajun Zhang

Ensembling different large language models (LLMs) to unleash their complementary potential and harness their individual strengths is highly valuable.

Arithmetic Reasoning • Data-to-Text Generation +1

BLSP: Bootstrapping Language-Speech Pre-training via Behavior Alignment of Continuation Writing

1 code implementation • 2 Sep 2023 • Chen Wang, Minpeng Liao, Zhongqiang Huang, Jinliang Lu, Junhong Wu, Yuchen Liu, Chengqing Zong, Jiajun Zhang

One is a cascaded approach where outputs (tokens or states) of a separately trained speech recognition system are used as inputs for LLMs, which limits their potential in modeling alignment between speech and text.

Speech Recognition +1

Instance-aware Prompt Learning for Language Understanding and Generation

1 code implementation • 18 Jan 2022 • Feihu Jin, Jinliang Lu, Jiajun Zhang, Chengqing Zong

Specifically, we suppose that each learnable prompt token has a different contribution to different instances, and we learn the contribution by calculating the relevance score between an instance and each prompt token.
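The scoring step described in the excerpt can be sketched as follows. The dot-product relevance and softmax weighting are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def instance_aware_prompt(instance_emb, prompt_tokens):
    """Reweight learnable prompt tokens by their relevance to one instance.

    instance_emb:  (d,)   embedding of the input instance
    prompt_tokens: (m, d) learnable prompt-token embeddings
    Returns the instance-specific prompt, shape (m, d).
    """
    # Relevance score between the instance and each prompt token
    scores = prompt_tokens @ instance_emb            # shape (m,)
    # Turn scores into normalized contribution weights (softmax)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Each prompt token's contribution now differs per instance
    return prompt_tokens * weights[:, None]

rng = np.random.default_rng(0)
reweighted = instance_aware_prompt(rng.normal(size=4), rng.normal(size=(3, 4)))
```

The output keeps the prompt's shape, so it can be prepended to the input sequence exactly as a standard (instance-agnostic) soft prompt would be.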

Few-Shot Learning

Exploiting Curriculum Learning in Unsupervised Neural Machine Translation

1 code implementation • Findings (EMNLP) 2021 • Jinliang Lu, Jiajun Zhang

Back-translation (BT) has become one of the de facto components of unsupervised neural machine translation (UNMT), as it is what explicitly gives UNMT its translation ability.
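As a rough illustration of one back-translation round (a toy word-reversing function stands in for the real target-to-source model, which in UNMT is trained jointly and improves each round):

```python
def back_translate(monolingual_tgt, tgt2src):
    """Build pseudo-parallel data for one BT round (sketch).

    monolingual_tgt: iterable of target-language sentences
    tgt2src: placeholder for the target->source translation model
    Returns (pseudo_source, gold_target) pairs for training src->tgt.
    """
    pairs = []
    for tgt in monolingual_tgt:
        pseudo_src = tgt2src(tgt)         # back-translate target -> source
        pairs.append((pseudo_src, tgt))   # noisy source, clean target
    return pairs

# Toy stand-in "model": reverse the word order
toy_model = lambda s: " ".join(reversed(s.split()))
pairs = back_translate(["a b c"], toy_model)
```

Because the target side of each pair is genuine monolingual text, the source-to-target model always learns to produce fluent output even though the source side is synthetic.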

Machine Translation • Translation
