Search Results for author: Yongqi Li

Found 16 papers, 7 papers with code

MMCoQA: Conversational Question Answering over Text, Tables, and Images

1 code implementation · ACL 2022 · Yongqi Li, Wenjie Li, Liqiang Nie

In this paper, we hence define a novel research task, i.e., multimodal conversational question answering (MMCoQA), aiming to answer users’ questions with multimodal knowledge sources via multi-turn conversations.

Benchmarking · Conversational Question Answering +1

Discriminative Probing and Tuning for Text-to-Image Generation

no code implementations · 7 Mar 2024 · Leigang Qu, Wenjie Wang, Yongqi Li, Hanwang Zhang, Liqiang Nie, Tat-Seng Chua

We present a discriminative adapter built on T2I models to probe their discriminative abilities on two representative tasks and leverage discriminative fine-tuning to improve their text-image alignment.

Text-to-Image Generation

Distillation Enhanced Generative Retrieval

no code implementations · 16 Feb 2024 · Yongqi Li, Zhen Zhang, Wenjie Wang, Liqiang Nie, Wenjie Li, Tat-Seng Chua

Generative retrieval is a promising new paradigm in text retrieval that generates identifier strings of relevant passages as the retrieval target.

Retrieval · Text Retrieval
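The core idea of generative retrieval described above, decoding identifier strings instead of matching a query against an index, can be sketched with a toy constrained decoder over a prefix trie of valid identifiers. The trie layout, token format, and scoring function below are illustrative assumptions, not the paper's implementation:

```python
# Toy sketch of generative retrieval: "retrieval" is decoding an identifier
# string token by token, constrained so only valid passage identifiers can
# be produced. A real system would use a trained seq2seq model's token
# log-probs as score_fn.

def build_trie(identifiers):
    """Prefix trie over '-'-tokenized identifier strings."""
    trie = {}
    for ident in identifiers:
        node = trie
        for tok in ident.split("-"):
            node = node.setdefault(tok, {})
        node["<eos>"] = {}
    return trie

def constrained_greedy_decode(score_fn, trie):
    """Greedily pick the highest-scoring next token among trie-valid ones."""
    node, output = trie, []
    while True:
        candidates = list(node.keys())
        if candidates == ["<eos>"] or not candidates:
            break
        best = max(candidates, key=lambda tok: score_fn(output, tok))
        if best == "<eos>":
            break
        output.append(best)
        node = node[best]
    return "-".join(output)

# Identifiers for three toy passages.
ids = ["doc-12-a", "doc-12-b", "doc-7-c"]
trie = build_trie(ids)
# Stand-in scorer that happens to prefer the token "7".
result = constrained_greedy_decode(
    lambda prefix, tok: 1.0 if tok == "7" else -len(tok), trie
)
print(result)  # -> doc-7-c
```

The trie constraint is what makes every decoded string a real passage identifier rather than a hallucinated one.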

Generative Cross-Modal Retrieval: Memorizing Images in Multimodal Language Models for Retrieval and Beyond

no code implementations · 16 Feb 2024 · Yongqi Li, Wenjie Wang, Leigang Qu, Liqiang Nie, Wenjie Li, Tat-Seng Chua

Building upon this capability, we propose to enable multimodal large language models (MLLMs) to memorize and recall images within their parameters.

Cross-Modal Retrieval · Retrieval

GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding

no code implementations · 3 Feb 2024 · Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You

Speculative decoding is a relatively new decoding framework that leverages small and efficient draft models to reduce the latency of LLMs.

Data-efficient Fine-tuning for LLM-based Recommendation

no code implementations · 30 Jan 2024 · Xinyu Lin, Wenjie Wang, Yongqi Li, Shuo Yang, Fuli Feng, Yinwei Wei, Tat-Seng Chua

To pursue the two objectives, we propose a novel data pruning method based on two scores, i.e., influence score and effort score, to efficiently identify the influential samples.
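Score-based data pruning of the kind described above can be sketched as ranking samples by a blend of two precomputed per-sample scores and keeping the top fraction. The blending weight, the keep ratio, and the score values below are hypothetical stand-ins, not the paper's influence/effort formulas:

```python
# Hedged sketch of two-score data pruning: blend two per-sample scores,
# rank, and keep the highest-scoring fraction of the training set.

def prune_dataset(samples, influence, effort, keep_ratio=0.5, alpha=0.5):
    """Keep the keep_ratio fraction of samples with the highest blended score."""
    scored = [
        (alpha * influence[i] + (1 - alpha) * effort[i], s)
        for i, s in enumerate(samples)
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    n_keep = max(1, int(len(samples) * keep_ratio))
    return [s for _, s in scored[:n_keep]]

samples = ["a", "b", "c", "d"]
influence = [0.9, 0.1, 0.8, 0.2]   # hypothetical precomputed scores
effort    = [0.7, 0.2, 0.3, 0.95]
print(prune_dataset(samples, influence, effort))  # -> ['a', 'd']
```

In practice the interesting part is how the two scores are estimated; the selection step itself is just a top-k over their combination.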

Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding

1 code implementation · 15 Jan 2024 · Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui

To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference.

Language Modelling · Large Language Model

Temporally and Distributionally Robust Optimization for Cold-Start Recommendation

1 code implementation · 15 Dec 2023 · Xinyu Lin, Wenjie Wang, Jujia Zhao, Yongqi Li, Fuli Feng, Tat-Seng Chua

They learn a feature extractor on warm-start items to align feature representations with interactions, and then use it to extract the representations of cold-start items for interaction prediction.

Collaborative Filtering

A Multi-facet Paradigm to Bridge Large Language Model and Recommendation

no code implementations · 10 Oct 2023 · Xinyu Lin, Wenjie Wang, Yongqi Li, Fuli Feng, See-Kiong Ng, Tat-Seng Chua

To combat these issues, we propose a novel multi-facet paradigm, namely TransRec, to bridge the LLMs to recommendation.

Attribute · Language Modelling +2

Learning to Rank in Generative Retrieval

2 code implementations · 27 Jun 2023 · Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li

However, only learning to generate is insufficient for generative retrieval.

Learning-To-Rank · Passage Ranking +3

Multiview Identifiers Enhanced Generative Retrieval

1 code implementation · 26 May 2023 · Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li

Instead of simply matching a query to pre-existing passages, generative retrieval generates identifier strings of passages as the retrieval target.

Retrieval

Prompting Large Language Models for Counterfactual Generation: An Empirical Study

no code implementations · 24 May 2023 · Yongqi Li, Mayi Xu, Xin Miao, Shen Zhou, Tieyun Qian

Based on this framework, we 1) investigate the strengths and weaknesses of LLMs as the counterfactual generator, and 2) disclose the factors that affect LLMs when generating counterfactuals, including both the intrinsic properties of LLMs and prompt designing.

counterfactual · Data Augmentation +7

Type-Aware Decomposed Framework for Few-Shot Named Entity Recognition

2 code implementations · 13 Feb 2023 · Yongqi Li, Yu Yu, Tieyun Qian

Despite the recent success achieved by several two-stage prototypical networks in few-shot named entity recognition (NER) task, the overdetected false spans at the span detection stage and the inaccurate and unstable prototypes at the type classification stage remain to be challenging problems.

Contrastive Learning · Few-shot NER +3

Data Distillation for Text Classification

2 code implementations · 17 Apr 2021 · Yongqi Li, Wenjie Li

In this paper, we study a related but orthogonal issue, data distillation, which aims to distill the knowledge from a large training dataset down to a smaller, synthetic one.

General Classification · text-classification +1

A Graph-guided Multi-round Retrieval Method for Conversational Open-domain Question Answering

no code implementations · 17 Apr 2021 · Yongqi Li, Wenjie Li, Liqiang Nie

Moreover, in order to collect more complementary information in the historical context, we also propose to incorporate the multi-round relevance feedback technique to explore the impact of the retrieval context on current question understanding.

Conversational Question Answering · Open-Domain Question Answering +1

Incremental Knowledge Based Question Answering

no code implementations · 18 Jan 2021 · Yongqi Li, Wenjie Li, Liqiang Nie

In the past years, Knowledge-Based Question Answering (KBQA), which aims to answer natural language questions using facts in a knowledge base, has been well developed.

Incremental Learning · Knowledge Distillation +1
