Search Results for author: Liang Pang

Found 60 papers, 33 papers with code

Optimal Partial Transport Based Sentence Selection for Long-form Document Matching

1 code implementation • COLING 2022 • Weijie Yu, Liang Pang, Jun Xu, Bing Su, Zhenhua Dong, Ji-Rong Wen

Thanks to the partial transport properties of OPT, the selected key sentences not only effectively enhance matching accuracy but can also serve as rationales that explain the matching results.

Sentence

Qsnail: A Questionnaire Dataset for Sequential Question Generation

1 code implementation • 22 Feb 2024 • Yan Lei, Liang Pang, Yuanzhuo Wang, HuaWei Shen, Xueqi Cheng

Questionnaires entail a series of questions that must conform to intricate constraints involving the questions, options, and overall structure.

Event-aware Video Corpus Moment Retrieval

no code implementations • 21 Feb 2024 • Danyang Hou, Liang Pang, HuaWei Shen, Xueqi Cheng

Video Corpus Moment Retrieval (VCMR) is a practical video retrieval task focused on identifying a specific moment within a vast corpus of untrimmed videos using a natural language query.

Contrastive Learning Moment Retrieval +4

Improving Video Corpus Moment Retrieval with Partial Relevance Enhancement

no code implementations • 21 Feb 2024 • Danyang Hou, Liang Pang, HuaWei Shen, Xueqi Cheng

The relevance between the video and query is partial, mainly evident in two aspects: (1) Scope: The untrimmed video contains information-rich frames, and not all are relevant to the query.

Moment Retrieval Retrieval +2

Stable Knowledge Editing in Large Language Models

no code implementations • 20 Feb 2024 • Zihao Wei, Liang Pang, Hanxing Ding, Jingcheng Deng, HuaWei Shen, Xueqi Cheng

The premise of localization results in incomplete knowledge editing, whereas the isolation assumption may impair both other knowledge and general abilities.

knowledge editing

Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models

no code implementations • 16 Feb 2024 • Hanxing Ding, Liang Pang, Zihao Wei, HuaWei Shen, Xueqi Cheng

A careful and balanced integration of the parametric knowledge within LLMs with external information is crucial to alleviate hallucinations.

Hallucination Retrieval
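The "retrieve only when it needs" idea above can be gated on model confidence. A minimal sketch, assuming a scalar confidence signal is available; the function names, threshold value, and gating criterion here are illustrative, not the paper's actual method:

```python
def answer_with_adaptive_retrieval(question, lm_answer, lm_confidence,
                                   retrieve, threshold=0.8):
    """Confidence-gated retrieval: trust the LM's parametric answer when
    it is confident, and fall back to retrieval augmentation only below
    the threshold. Returns (answer, retrieval_was_used)."""
    if lm_confidence >= threshold:
        return lm_answer, False          # parametric knowledge suffices
    evidence = retrieve(question)        # external lookup only when needed
    return f"{lm_answer} [checked against: {evidence}]", True
```

The design point is that retrieval is an optional branch, not a fixed pipeline stage, which is what keeps external information from overriding reliable parametric knowledge.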

Structured, Complex and Time-complete Temporal Event Forecasting

1 code implementation • 2 Dec 2023 • Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Liang Pang, Tat-Seng Chua

To address these limitations, we introduce a novel formulation for Structured, Complex, and Time-complete Temporal Event (SCTc-TE).

AI-Generated Images Introduce Invisible Relevance Bias to Text-Image Retrieval

no code implementations • 23 Nov 2023 • Shicheng Xu, Danyang Hou, Liang Pang, Jingcheng Deng, Jun Xu, HuaWei Shen, Xueqi Cheng

Furthermore, our subsequent exploration reveals that the inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.

Cross-Modal Retrieval Image Retrieval +2

HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data

1 code implementation • 22 Nov 2023 • Qifan Yu, Juncheng Li, Longhui Wei, Liang Pang, Wentao Ye, Bosheng Qin, Siliang Tang, Qi Tian, Yueting Zhuang

Multi-modal Large Language Models (MLLMs) tuned on machine-generated instruction-following data have demonstrated remarkable performance in various multi-modal understanding and generation tasks.

Attribute counterfactual +3

De-fine: Decomposing and Refining Visual Programs with Auto-Feedback

no code implementations • 21 Nov 2023 • Minghe Gao, Juncheng Li, Hao Fei, Liang Pang, Wei Ji, Guoming Wang, Wenqiao Zhang, Siliang Tang, Yueting Zhuang

Visual programming, a modular and generalizable paradigm, integrates different modules and Python operators to solve various vision-language tasks.

Logical Reasoning

Think Before You Speak: Cultivating Communication Skills of Large Language Models via Inner Monologue

no code implementations • 13 Nov 2023 • Junkai Zhou, Liang Pang, HuaWei Shen, Xueqi Cheng

The emergence of large language models (LLMs) further improves the capabilities of open-domain dialogue systems and can generate fluent, coherent, and diverse responses.

Dialogue Generation In-Context Learning +2

Do LLMs Implicitly Exhibit User Discrimination in Recommendation? An Empirical Study

no code implementations • 13 Nov 2023 • Chen Xu, Wenjie Wang, Yuxin Li, Liang Pang, Jun Xu, Tat-Seng Chua

Recently, Large Language Models (LLMs) have enhanced user interaction, enabling seamless information retrieval and recommendations.

Information Retrieval Recommendation Systems +1

Plot Retrieval as an Assessment of Abstract Semantic Association

no code implementations • 3 Nov 2023 • Shicheng Xu, Liang Pang, Jiangnan Li, Mo Yu, Fandong Meng, HuaWei Shen, Xueqi Cheng, Jie zhou

Readers usually only give an abstract and vague description as the query based on their own understanding, summaries, or speculations of the plot, which requires the retrieval model to have a strong ability to estimate the abstract semantic associations between the query and candidate plots.

Information Retrieval Retrieval

RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder for Language Modeling

1 code implementation • 16 Oct 2023 • Jingcheng Deng, Liang Pang, HuaWei Shen, Xueqi Cheng

It encodes the text corpus into a latent space, capturing current and future information from both source and target text.

Hallucination Language Modelling +2

Multi-level Adaptive Contrastive Learning for Knowledge Internalization in Dialogue Generation

no code implementations • 13 Oct 2023 • Chenxu Yang, Zheng Lin, Lanrui Wang, Chong Tian, Liang Pang, Jiangnan Li, Qirong Ho, Yanan Cao, Weiping Wang

Knowledge-grounded dialogue generation aims to mitigate the issue of text degeneration by incorporating external knowledge to supplement the context.

Contrastive Learning Dialogue Generation

MacLaSa: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space

1 code implementation • 22 May 2023 • Hanxing Ding, Liang Pang, Zihao Wei, HuaWei Shen, Xueqi Cheng, Tat-Seng Chua

Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously.

Attribute Text Generation

Visual Transformation Telling

no code implementations • 3 May 2023 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng

In this paper, we propose a new visual reasoning task, called Visual Transformation Telling (VTT).

Dense Video Captioning Visual Reasoning +1

Visual Reasoning: from State to Transformation

1 code implementation • 2 May 2023 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng

Such state-driven visual reasoning has limitations in reflecting the ability to infer the dynamics between different states, which has been shown to be equally important for human cognition in Piaget's theory.

Visual Question Answering (VQA) Visual Reasoning

Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks

1 code implementation • 28 Apr 2023 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng, Tat-Seng Chua

Second, IR verifies the answer at each node of the CoQ: when IR gives high confidence, it corrects any answer that is inconsistent with the retrieved information, which improves credibility.

Fact Checking Information Retrieval +6

Multi-video Moment Ranking with Multimodal Clue

no code implementations • 29 Jan 2023 • Danyang Hou, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng

In this paper, we focus on improving two problems of the two-stage method: (1) Moment prediction bias: the predicted moments for most queries come from the top retrieved videos, ignoring the possibility that the target moment lies in the bottom retrieved videos; this is caused by the inconsistency of Shared Normalization between training and inference.

Moment Retrieval Retrieval +1

Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding

no code implementations • 10 Jan 2023 • Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, HuaWei Shen, Xueqi Cheng

Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal.

Natural Language Understanding Network Pruning
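The excerpt describes comparative loss as a ranking loss over the task-specific losses of the full model and its ablated variants. A minimal sketch of that idea as a pairwise hinge; the margin and the plain-Python formulation are illustrative, and the paper's exact construction may differ:

```python
def comparative_loss(task_losses, margin=0.0):
    """Hinge-style ranking loss over the task losses of progressively
    ablated models.

    task_losses: list ordered from the full model to the most ablated one.
    We penalise any pair where a more ablated model achieves a *lower*
    task loss than a less ablated one, encoding the expectation that
    the full model's task loss should be minimal.
    """
    total = 0.0
    for i in range(len(task_losses)):
        for j in range(i + 1, len(task_losses)):
            # desired ordering: task_losses[i] <= task_losses[j]
            total += max(0.0, margin + task_losses[i] - task_losses[j])
    return total
```

When the ordering already holds (e.g. losses `[0.2, 0.5, 0.9]`), the comparative term vanishes and only the task-specific losses drive training.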

NIR-Prompt: A Multi-task Generalized Neural Information Retrieval Training Framework

1 code implementation • 1 Dec 2022 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng

Different needs correspond to different IR tasks such as document retrieval, open-domain question answering, retrieval-based dialogue, etc., while they share the same schema to estimate the relationship between texts.

Information Retrieval Open-Domain Question Answering +1

LoL: A Comparative Regularization Loss over Query Reformulation Losses for Pseudo-Relevance Feedback

1 code implementation • 25 Apr 2022 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng

Ideally, if a PRF model can distinguish between irrelevant and relevant information in the feedback, the more feedback documents there are, the better the revised query will be.

Retrieval

Uncertainty Calibration for Ensemble-Based Debiasing Methods

no code implementations • NeurIPS 2021 • Ruibin Xiong, Yimeng Chen, Liang Pang, Xueqi Cheng, Yanyan Lan

Ensemble-based debiasing methods have been shown effective in mitigating the reliance of classifiers on specific dataset bias, by exploiting the output of a bias-only model to adjust the learning target.

Fact Verification

Transductive Learning for Unsupervised Text Style Transfer

1 code implementation • EMNLP 2021 • Fei Xiao, Liang Pang, Yanyan Lan, Yan Wang, HuaWei Shen, Xueqi Cheng

The proposed transductive learning approach is general and effective for the task of unsupervised style transfer, and we will apply it to the other two typical methods in the future.

Retrieval Style Transfer +3

Toward the Understanding of Deep Text Matching Models for Information Retrieval

no code implementations • 16 Aug 2021 • Lijuan Chen, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng

We further extend these constraints to the semantic settings, which are shown to be better satisfied for all the deep text matching models.

Information Retrieval Retrieval +2

Modeling Relevance Ranking under the Pre-training and Fine-tuning Paradigm

no code implementations • 12 Aug 2021 • Lin Bo, Liang Pang, Gang Wang, Jun Xu, Xiuqiang He, Ji-Rong Wen

Experimental results based on three publicly available benchmarks showed that in both implementations, Pre-Rank can outperform the underlying ranking models and achieve state-of-the-art performance.

Document Ranking Information Retrieval +3

Sketch and Customize: A Counterfactual Story Generator

1 code implementation • 2 Apr 2021 • Changying Hao, Liang Pang, Yanyan Lan, Yan Wang, Jiafeng Guo, Xueqi Cheng

In the sketch stage, a skeleton is extracted from the original ending by removing words that conflict with the counterfactual condition.

counterfactual Text Generation

Match-Ignition: Plugging PageRank into Transformer for Long-form Text Matching

1 code implementation • 16 Jan 2021 • Liang Pang, Yanyan Lan, Xueqi Cheng

However, these models designed for short texts cannot handle the long-form text matching problem well, because many contexts in long-form texts cannot be directly aligned with each other, and it is difficult for existing models to capture the key matching signals from such noisy data.

Community Question Answering Information Retrieval +5
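Match-Ignition's headline idea is to plug PageRank into the matching pipeline to filter noisy content before alignment. A minimal power-iteration PageRank over a sentence-similarity graph, sketching how central sentences could be selected; the toy similarity matrix, damping factor, and top-k selection step are illustrative, not the paper's exact procedure:

```python
import numpy as np

def pagerank(sim, d=0.85, iters=50):
    """Power-iteration PageRank over a sentence-similarity graph.

    sim: (n, n) nonnegative similarity matrix (diagonal ignored).
    Returns a probability vector of sentence importances.
    """
    n = sim.shape[0]
    A = sim.astype(float).copy()
    np.fill_diagonal(A, 0.0)
    col = A.sum(axis=0)
    # Column-normalise into a transition matrix; dangling nodes -> uniform.
    A = np.where(col > 0, A / np.where(col > 0, col, 1.0), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * A @ r
    return r

# Toy graph: sentence 0 is similar to both others, so it is most central.
sim = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
scores = pagerank(sim)
keep = np.argsort(scores)[::-1][:2]   # keep the two most central sentences
```

Only the kept sentences would then be fed to the Transformer matcher, which is how the noise filtering keeps long-form inputs tractable.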

Transformation Driven Visual Reasoning

1 code implementation • CVPR 2021 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng

Following this definition, a new dataset named TRANCE is constructed on the basis of CLEVR, including three levels of settings, i.e., Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views).

Attribute Visual Question Answering (VQA) +1

Beyond Language: Learning Commonsense from Images for Reasoning

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wanqing Cui, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng

This paper proposes a novel approach to learn commonsense from images, instead of limited raw texts or costly constructed knowledge bases, for the commonsense reasoning problem in NLP.

Language Modelling Question Answering

Modeling Topical Relevance for Multi-Turn Dialogue Generation

no code implementations • 27 Sep 2020 • Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, Dawei Yin

Therefore, an ideal dialogue generation models should be able to capture the topic information of each context, detect the relevant context, and produce appropriate responses accordingly.

Dialogue Generation Sentence

Ranking Enhanced Dialogue Generation

no code implementations • 13 Aug 2020 • Changying Hao, Liang Pang, Yanyan Lan, Fei Sun, Jiafeng Guo, Xue-Qi Cheng

To tackle this problem, we propose a Ranking Enhanced Dialogue generation framework in this paper.

Dialogue Generation Response Generation

Robust Reinforcement Learning with Wasserstein Constraint

no code implementations • 1 Jun 2020 • Linfang Hou, Liang Pang, Xin Hong, Yanyan Lan, Zhi-Ming Ma, Dawei Yin

Robust Reinforcement Learning aims to find the optimal policy with some extent of robustness to environmental dynamics.

Reinforcement Learning (RL)

L2R2: Leveraging Ranking for Abductive Reasoning

1 code implementation • 22 May 2020 • Yunchang Zhu, Liang Pang, Yanyan Lan, Xue-Qi Cheng

To fill this gap, we switch to a ranking perspective that sorts the hypotheses in order of their plausibilities.

Language Modelling Learning-To-Rank +1

SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval

2 code implementations • 12 Dec 2019 • Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xue-Qi Cheng, Ji-Rong Wen

In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents.

Information Retrieval Learning-To-Rank +1

Continual Match Based Training in Pommerman: Technical Report

no code implementations • 18 Dec 2018 • Peng Peng, Liang Pang, Yufeng Yuan, Chao GAO

We show in the experiments that Pommerman is a perfect environment for studying continual learning, and the agent can improve its performance by continually learning new skills without forgetting the old ones.

Continual Learning

Locally Smoothed Neural Networks

1 code implementation • 22 Nov 2017 • Liang Pang, Yanyan Lan, Jun Xu, Jiafeng Guo, Xue-Qi Cheng

The main idea is to represent the weight matrix of the locally connected layer as the product of the kernel and the smoother, where the kernel is shared over different local receptive fields, and the smoother is for determining the importance and relations of different local receptive fields.

Face Verification Question Answering +1
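The kernel/smoother factorization described above can be sketched directly: the effective weight matrix of the locally connected layer is the outer product of a shared kernel and a per-field smoother. A minimal 1-D NumPy illustration; the shapes and the sliding-window field construction are assumptions made for the sketch:

```python
import numpy as np

def locally_smoothed_weights(kernel, smoother):
    """Per-receptive-field weights as kernel (shared across fields) times
    smoother (one importance weight per field), instead of an independent
    weight vector per field as in a plain locally connected layer.

    kernel:   (k,) shared filter
    smoother: (f,) importance of each of the f local fields
    returns:  (f, k) effective weight matrix
    """
    return np.outer(smoother, kernel)

def locally_smoothed_layer(x, kernel, smoother):
    """Apply the layer: each local field is a sliding window of x."""
    k, f = len(kernel), len(smoother)
    fields = np.stack([x[i:i + k] for i in range(f)])   # (f, k) windows
    W = locally_smoothed_weights(kernel, smoother)
    return (fields * W).sum(axis=1)                     # one output per field

x = np.arange(6.0)
out = locally_smoothed_layer(x, kernel=np.array([1.0, 1.0]),
                             smoother=np.array([1.0, 0.5, 0.25]))
# out -> [1.0, 1.5, 1.25]: same shared filter, field-specific scaling
```

The factorization cuts the parameter count from f*k to f + k while still letting different receptive fields contribute with different strengths.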

A Deep Investigation of Deep IR Models

no code implementations • 24 Jul 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng

Therefore, it is necessary to identify the difference between automatically learned features by deep IR models and hand-crafted features used in traditional learning to rank approaches.

Information Retrieval Learning-To-Rank +1

MatchZoo: A Toolkit for Deep Text Matching

1 code implementation • 23 Jul 2017 • Yixing Fan, Liang Pang, Jianpeng Hou, Jiafeng Guo, Yanyan Lan, Xue-Qi Cheng

In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods.

Ad-Hoc Information Retrieval Information Retrieval +3

A Study of MatchPyramid Models on Ad-hoc Retrieval

1 code implementation • 15 Jun 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng

Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it.

Machine Translation Paraphrase Identification +4

Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN

1 code implementation • 15 Apr 2016 • Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, Xue-Qi Cheng

In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e., the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word-level interaction at the current position.

Position
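The recursive composition described above can be sketched as a 2-D recursion over prefix states. Match-SRNN uses a spatial GRU as the composition function; the scalar weighted-sum-plus-tanh below is a deliberate simplification that only shows the recursion structure:

```python
import numpy as np

def recursive_match(sim):
    """Simplified scalar version of the 2-D recursion: the matching state
    at (i, j) composes the states of the three prefixes with the local
    word-level interaction sim[i, j]. The fixed 0.25 weighting stands in
    for the learned spatial-GRU gates of the real model.
    """
    m, n = sim.shape
    h = np.zeros((m + 1, n + 1))            # h[0, :] and h[:, 0] = empty prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            h[i, j] = np.tanh(0.25 * (h[i - 1, j] + h[i, j - 1]
                                      + h[i - 1, j - 1] + sim[i - 1, j - 1]))
    return h[m, n]   # global matching score = state at the full-text corner
```

Because every state aggregates all of its prefixes, the corner state summarises every partial alignment of the two texts, which is what makes the final score a global matching signal.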

Text Matching as Image Recognition

7 code implementations • 20 Feb 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, Xue-Qi Cheng

An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score.

Ad-Hoc Information Retrieval Text Matching
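The "matching patterns from words" idea starts from a word-word matching matrix that is then treated as an image for a CNN. A minimal sketch of the matrix construction using cosine similarity between word embeddings; cosine is one of several similarity functions explored in this line of work, and the shapes are illustrative:

```python
import numpy as np

def matching_matrix(E1, E2):
    """Build the word-word 'matching image': M[i, j] is the cosine
    similarity between word i of text 1 and word j of text 2. A CNN
    then extracts matching patterns from M as it would from an image.

    E1: (m, d) embeddings of text 1;  E2: (n, d) embeddings of text 2.
    """
    A = E1 / np.linalg.norm(E1, axis=1, keepdims=True)
    B = E2 / np.linalg.norm(E2, axis=1, keepdims=True)
    return A @ B.T                      # (m, n), entries in [-1, 1]
```

Exact word overlaps show up as bright diagonal-like streaks in M, which is why convolutional filters designed for images can pick up n-gram and phrase-level matching patterns.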

A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations

1 code implementation • 26 Nov 2015 • Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, Xue-Qi Cheng

Our model has several advantages: (1) By using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) By matching with multiple positional sentence representations, it is flexible to aggregate different important contextualized local information in a sentence to support the matching; (3) Experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model.

Information Retrieval Question Answering +3

Combination of Diverse Ranking Models for Personalized Expedia Hotel Searches

no code implementations • 29 Nov 2013 • Xudong Liu, Bing Xu, Yuyu Zhang, Qiang Yan, Liang Pang, Qiang Li, Hanxiao Sun, Bin Wang

The ICDM Challenge 2013 is to apply machine learning to the problem of hotel ranking, aiming to maximize purchases according to given hotel characteristics, location attractiveness of hotels, user's aggregated purchase history and competitive online travel agency information for each potential hotel choice.

BIG-bench Machine Learning Feature Engineering
