Search Results for author: Yunshui Li

Found 5 papers, 3 papers with code

One Shot Learning as Instruction Data Prospector for Large Language Models

1 code implementation • 16 Dec 2023 • Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Junhao Liu, Tongliang Liu, Fei Huang, Yongbin Li

Nuggets assesses the potential of individual instruction examples to act as effective one-shot examples, thereby identifying those that can significantly enhance performance across diverse tasks.

One-Shot Learning
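The core idea behind Nuggets can be sketched in a few lines: score each candidate instruction example by how often using it as a one-shot prompt lowers the model's loss on a set of anchor tasks, relative to zero-shot prompting, then keep the highest-scoring candidates. This is a minimal illustrative sketch, not the paper's implementation; `zero_shot_loss` and `one_shot_loss` are hypothetical stand-ins for real language-model perplexity calls.

```python
def golden_score(candidate, anchor_tasks, zero_shot_loss, one_shot_loss):
    """Fraction of anchor tasks where prepending `candidate` as a
    one-shot example lowers the loss versus zero-shot prompting."""
    wins = sum(
        1 for task in anchor_tasks
        if one_shot_loss(candidate, task) < zero_shot_loss(task)
    )
    return wins / len(anchor_tasks)

def select_top(candidates, anchor_tasks, zero_shot_loss, one_shot_loss, k):
    """Keep the k candidates with the highest score as instruction data."""
    ranked = sorted(
        candidates,
        key=lambda c: golden_score(c, anchor_tasks,
                                   zero_shot_loss, one_shot_loss),
        reverse=True,
    )
    return ranked[:k]
```

In the paper's framing, the model itself acts as the "prospector": no external labels are needed, only its own loss under the two prompting conditions.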

Marathon: A Race Through the Realm of Long Context with Large Language Models

no code implementations • 15 Dec 2023 • Lei Zhang, Yunshui Li, Ziqiang Liu, Jiaxi Yang, Junhao Liu, Min Yang

Although many benchmarks already exist for evaluating the long-context understanding and reasoning capabilities of large language models, the rapid expansion of context windows has rendered these existing benchmarks insufficient.

Long-Context Understanding • Multiple-choice

VDialogUE: A Unified Evaluation Benchmark for Visually-grounded Dialogue

no code implementations • 14 Sep 2023 • Yunshui Li, Binyuan Hui, Zhaochao Yin, Wanwei He, Run Luo, Yuxing Long, Min Yang, Fei Huang, Yongbin Li

Visually-grounded dialogue systems, which integrate multiple modalities such as text and visual inputs, have become an increasingly popular area of investigation.

PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts

1 code implementation • 24 May 2023 • Yunshui Li, Binyuan Hui, Zhichao Yin, Min Yang, Fei Huang, Yongbin Li

PaCE utilizes a combination of several fundamental experts to accommodate multiple dialogue-related tasks, and can be pre-trained using limited dialogue data together with extensive non-dialogue multi-modal data.

Dialogue State Tracking • Image Retrieval • +4
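The compositional-experts idea behind PaCE can be sketched as a routing table: each downstream dialogue task activates a different subset of shared expert modules, so a new task reuses experts pre-trained on earlier (possibly non-dialogue) data. This is a toy sketch under assumed names; the expert functions and the routing table below are hypothetical placeholders, not the paper's architecture.

```python
def compose(experts, active, features):
    """Pass `features` through the named expert modules in order."""
    for name in active:
        features = experts[name](features)
    return features

# Hypothetical routing: tasks share the earlier experts and add new ones
# progressively, mirroring the "progressive" pre-training scheme.
TASK_EXPERTS = {
    "intent_prediction": ["grounding", "context"],
    "response_generation": ["grounding", "context", "generation"],
}
```

The design point this illustrates is composability: because tasks differ only in which experts they activate, experts trained on abundant non-dialogue multi-modal data can be reused for dialogue tasks where data is limited.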

Self-Distillation with Meta Learning for Knowledge Graph Completion

1 code implementation • Findings of the Association for Computational Linguistics: EMNLP 2022 • Yunshui Li, Junhao Liu, Chengming Li, Min Yang

In this paper, we propose a self-distillation framework with meta learning (MetaSD) for knowledge graph completion with dynamic pruning, which aims to learn compressed graph embeddings and tackle long-tail samples.

Knowledge Graph Completion • Meta-Learning • +1
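Two ingredients named in the MetaSD abstract can be sketched concretely: dynamic pruning, which compresses a large teacher embedding model into a compact student by zeroing small-magnitude weights, and self-distillation, where the student is trained to match the teacher's triple scores. This is a toy sketch with stand-in numerics, not the paper's actual losses or the meta-learning update.

```python
def prune(weights, keep_ratio):
    """Dynamic pruning (toy form): keep only the largest-magnitude
    weights, zeroing the rest, so the student stays compact."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(abs(w) for w in weights)[-k]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def distill_loss(teacher_scores, student_scores):
    """Self-distillation signal: mean squared gap between the teacher's
    and the pruned student's scores on the same set of triples."""
    gaps = [(t - s) ** 2 for t, s in zip(teacher_scores, student_scores)]
    return sum(gaps) / len(gaps)
```

In the full method, the teacher is itself updated based on how well the student performs (the meta-learning step), so teacher and student improve jointly rather than in a fixed one-way distillation.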
