1 code implementation • ACL 2022 • Yongqi Li, Wenjie Li, Liqiang Nie
In this paper, we therefore define a novel research task, i.e., multimodal conversational question answering (MMCoQA), aiming to answer users’ questions with multimodal knowledge sources via multi-turn conversations.
no code implementations • 7 Mar 2024 • Leigang Qu, Wenjie Wang, Yongqi Li, Hanwang Zhang, Liqiang Nie, Tat-Seng Chua
We present a discriminative adapter built on T2I models to probe their discriminative abilities on two representative tasks and leverage discriminative fine-tuning to improve their text-image alignment.
no code implementations • 16 Feb 2024 • Yongqi Li, Zhen Zhang, Wenjie Wang, Liqiang Nie, Wenjie Li, Tat-Seng Chua
Generative retrieval is a promising new paradigm in text retrieval that generates identifier strings of relevant passages as the retrieval target.
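The identifier-generation idea above can be sketched with constrained decoding: the model may only emit token sequences that are prefixes of real passage identifiers. This is a minimal toy, assuming made-up structured identifiers and a trivial character-overlap scorer in place of a trained seq2seq model; none of these names come from the paper.

```python
# Toy sketch of generative retrieval: decode an identifier string one token at
# a time, restricted by a prefix trie so only valid identifiers can be emitted.

def build_trie(identifiers):
    """Map each prefix tuple to the set of valid next tokens."""
    trie = {}
    for ident in identifiers:
        tokens = ident.split("-")
        for i in range(len(tokens)):
            trie.setdefault(tuple(tokens[:i]), set()).add(tokens[i])
    return trie

def generate(trie, score, max_len=8):
    """Greedily emit tokens, restricted at each step to prefixes of real identifiers."""
    prefix = []
    for _ in range(max_len):
        allowed = trie.get(tuple(prefix))
        if not allowed:
            break  # reached a complete identifier
        prefix.append(max(allowed, key=lambda tok: score(prefix, tok)))
    return "-".join(prefix)

# Illustrative corpus of structured passage identifiers (not from the paper).
ids = ["sports-tennis-002", "sports-football-001", "news-politics-003"]
trie = build_trie(ids)

# Stand-in scorer: count how many characters of the token appear in the query.
def make_scorer(query):
    return lambda prefix, tok: sum(c in query for c in tok)

print(generate(trie, make_scorer("sports tennis")))  # → "sports-tennis-002"
```

A real system would replace the scorer with next-token probabilities from a fine-tuned language model; the trie constraint is what guarantees every generated string maps back to an actual passage.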
no code implementations • 16 Feb 2024 • Yongqi Li, Wenjie Wang, Leigang Qu, Liqiang Nie, Wenjie Li, Tat-Seng Chua
Building upon this capability, we propose to enable multimodal large language models (MLLMs) to memorize and recall images within their parameters.
no code implementations • 3 Feb 2024 • Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You
Speculative decoding is a relatively new decoding framework that leverages small and efficient draft models to reduce the latency of LLMs.
no code implementations • 30 Jan 2024 • Xinyu Lin, Wenjie Wang, Yongqi Li, Shuo Yang, Fuli Feng, Yinwei Wei, Tat-Seng Chua
To pursue the two objectives, we propose a novel data pruning method based on two scores, i.e., influence score and effort score, to efficiently identify the influential samples.
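The selection step can be sketched as ranking samples by a combined score and keeping the top fraction. The score functions below are simple stand-ins (a loss-drop proxy and a gradient-norm proxy), not the influence and effort estimators defined in the paper.

```python
# Hedged sketch of score-based data pruning: rank every sample by the sum of
# two per-sample scores and keep the highest-ranked fraction.

def prune(samples, influence, effort, keep_ratio=0.5):
    """Keep the top keep_ratio of samples by summed (influence + effort) score."""
    scored = sorted(samples, key=lambda s: influence(s) + effort(s), reverse=True)
    k = max(1, int(len(scored) * keep_ratio))
    return scored[:k]

# Toy samples: (id, loss-drop proxy, gradient-norm proxy) — illustrative values.
data = [("a", 0.9, 0.1), ("b", 0.2, 0.1), ("c", 0.5, 0.6), ("d", 0.1, 0.05)]
kept = prune(data, influence=lambda s: s[1], effort=lambda s: s[2], keep_ratio=0.5)
print([s[0] for s in kept])  # → ['c', 'a']
```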
1 code implementation • 15 Jan 2024 • Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui
To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference.
1 code implementation • 15 Dec 2023 • Xinyu Lin, Wenjie Wang, Jujia Zhao, Yongqi Li, Fuli Feng, Tat-Seng Chua
They learn a feature extractor on warm-start items to align feature representations with interactions, and then leverage the feature extractor to extract the feature representations of cold-start items for interaction prediction.
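The warm-to-cold paradigm described above can be illustrated with a toy linear extractor: fit a map from content features to interaction-derived embeddings on warm items, then apply it to a cold item that has features but no interactions. Real systems use neural extractors; the closed-form one-dimensional regression here is purely illustrative.

```python
# Toy sketch of the warm-to-cold paradigm: learn feature → embedding on
# warm-start items, then predict embeddings for cold-start items.

def fit_extractor(features, embeddings):
    """Least-squares slope/intercept mapping scalar features to embeddings."""
    n = len(features)
    mx = sum(features) / n
    my = sum(embeddings) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(features, embeddings))
    var = sum((x - mx) ** 2 for x in features)
    w = cov / var
    return lambda x: w * (x - mx) + my

# Warm items: a content feature and the embedding learned from interactions.
warm_feats = [1.0, 2.0, 3.0, 4.0]
warm_embs = [2.1, 3.9, 6.0, 8.0]  # roughly embedding ≈ 2 * feature
extractor = fit_extractor(warm_feats, warm_embs)

# A cold item has features but no interactions; predict its embedding.
print(round(extractor(5.0), 2))  # → 9.95
```

The paper's point is that this paradigm has limitations the authors set out to address; the sketch only shows the baseline being critiqued.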
no code implementations • 10 Oct 2023 • Xinyu Lin, Wenjie Wang, Yongqi Li, Fuli Feng, See-Kiong Ng, Tat-Seng Chua
To combat these issues, we propose a novel multi-facet paradigm, namely TransRec, to bridge the LLMs to recommendation.
2 code implementations • 27 Jun 2023 • Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li
However, only learning to generate is insufficient for generative retrieval.
1 code implementation • 26 May 2023 • Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li
Instead of simply matching a query to pre-existing passages, generative retrieval generates identifier strings of passages as the retrieval target.
no code implementations • 24 May 2023 • Yongqi Li, Mayi Xu, Xin Miao, Shen Zhou, Tieyun Qian
Based on this framework, we 1) investigate the strengths and weaknesses of LLMs as the counterfactual generator, and 2) disclose the factors that affect LLMs when generating counterfactuals, including both the intrinsic properties of LLMs and prompt designing.
2 code implementations • 13 Feb 2023 • Yongqi Li, Yu Yu, Tieyun Qian
Despite the recent success of several two-stage prototypical networks on the few-shot named entity recognition (NER) task, the false spans over-detected at the span-detection stage and the inaccurate, unstable prototypes at the type-classification stage remain challenging problems.
Ranked #2 on Few-shot NER on Few-NERD (INTRA)
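The type-classification stage of a prototypical network can be sketched as follows: each entity type's prototype is the mean of its support-set embeddings, and a query span is assigned to the nearest prototype. The two-dimensional embeddings and type labels below are toy stand-ins, not learned span representations from the paper.

```python
# Minimal sketch of prototypical classification: mean support embeddings per
# type, then nearest-prototype assignment for a query span.

def prototype(vectors):
    """Mean embedding of one type's support examples."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def classify(query, support):
    """Assign the query span to the entity type with the closest prototype."""
    protos = {label: prototype(vecs) for label, vecs in support.items()}
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(query, p))
    return min(protos, key=lambda label: dist(protos[label]))

# Toy support set: two embeddings per entity type.
support = {
    "PER": [(0.9, 0.1), (1.0, 0.0)],
    "LOC": [(0.1, 0.9), (0.0, 1.0)],
}
print(classify((0.8, 0.2), support))  # → PER
```

The instability the abstract mentions arises exactly here: with only a handful of support examples, these mean prototypes shift a lot from episode to episode.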
2 code implementations • 17 Apr 2021 • Yongqi Li, Wenjie Li
In this paper, we study a related but orthogonal issue, data distillation, which aims to distill the knowledge from a large training dataset down to a smaller, synthetic one.
no code implementations • 17 Apr 2021 • Yongqi Li, Wenjie Li, Liqiang Nie
Moreover, to collect complementary information from the historical context, we propose to incorporate a multi-round relevance-feedback technique to explore the impact of the retrieval context on current question understanding.
Conversational Question Answering • Open-Domain Question Answering +1
no code implementations • 18 Jan 2021 • Yongqi Li, Wenjie Li, Liqiang Nie
In the past years, Knowledge-Based Question Answering (KBQA), which aims to answer natural language questions using facts in a knowledge base, has been well developed.