Search Results for author: Ji-Rong Wen

Found 172 papers, 105 papers with code

Optimal Partial Transport Based Sentence Selection for Long-form Document Matching

1 code implementation COLING 2022 Weijie Yu, Liang Pang, Jun Xu, Bing Su, Zhenhua Dong, Ji-Rong Wen

Owing to the partial transport properties of OPT, the selected key sentences not only effectively enhance matching accuracy but also serve as interpretable rationales for the matching results.

Finding the Dominant Winning Ticket in Pre-Trained Language Models

no code implementations Findings (ACL) 2022 Zhuocheng Gong, Di He, Yelong Shen, Tie-Yan Liu, Weizhu Chen, Dongyan Zhao, Ji-Rong Wen, Rui Yan

Empirically, we show that (a) the dominant winning ticket can achieve performance that is comparable with that of the full-parameter model, (b) the dominant winning ticket is transferable across different tasks, and (c) the dominant winning ticket has a natural structure within each parameter matrix.

Semantic Sentence Matching via Interacting Syntax Graphs

no code implementations COLING 2022 Chen Xu, Jun Xu, Zhenhua Dong, Ji-Rong Wen

In this paper, we formalize the task of semantic sentence matching as a problem of graph matching in which each sentence is represented as a directed graph according to its syntactic structures.

Graph Matching

There Are a Thousand Hamlets in a Thousand People’s Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory

no code implementations ACL 2022 Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan

Knowledge-grounded conversation (KGC) shows great potential in building an engaging and knowledgeable chatbot, and knowledge selection is a key ingredient in it.


Continual Pre-training of Language Models for Math Problem Understanding with Syntax-Aware Memory Network

1 code implementation ACL 2022 Zheng Gong, Kun Zhou, Xin Zhao, Jing Sha, Shijin Wang, Ji-Rong Wen

In this paper, we study how to continually pre-train language models for improving the understanding of math problems.

Optimizing Factual Accuracy in Text Generation through Dynamic Knowledge Selection

no code implementations 30 Aug 2023 Hongjin Qian, Zhicheng Dou, Jiejun Tan, Haonan Chen, Haoqi Gu, Ruofei Lai, Xinyu Zhang, Zhao Cao, Ji-Rong Wen

Previous methods use external knowledge as references for text generation to enhance factuality, but often struggle with knowledge mix-up (e.g., entity mismatch) caused by irrelevant references.

Text Generation

A Survey on Large Language Model based Autonomous Agents

1 code implementation 22 Aug 2023 Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, ZhiYuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen

In this paper, we present a comprehensive survey of these studies, delivering a systematic review of the field of LLM-based autonomous agents from a holistic perspective.

Language Modelling Large Language Model

Uncovering User Interest from Biased and Noised Watch Time in Video Recommendation

1 code implementation 16 Aug 2023 Haiyuan Zhao, Lei Zhang, Jun Xu, Guohao Cai, Zhenhua Dong, Ji-Rong Wen

In video recommendation, watch time is commonly adopted as an indicator of user interest.

Large Language Models for Information Retrieval: A Survey

1 code implementation 14 Aug 2023 Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, Ji-Rong Wen

This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity).

Information Retrieval Question Answering +2

LTP-MMF: Towards Long-term Provider Max-min Fairness Under Recommendation Feedback Loops

1 code implementation 11 Aug 2023 Chen Xu, Xiaopeng Ye, Jun Xu, Xiao Zhang, Weiran Shen, Ji-Rong Wen

RFL means that the recommender system can only receive feedback on exposed items from users and must update recommender models incrementally based on this feedback.

Fairness Recommendation Systems

Counterfactual Cross-modality Reasoning for Weakly Supervised Video Moment Localization

1 code implementation 10 Aug 2023 Zezhong Lv, Bing Su, Ji-Rong Wen

Finally, by suppressing the unimodal effect of masked query, we can rectify the reconstructions of video proposals to perform reasonable contrastive learning.

Contrastive Learning

Synthesizing Long-Term Human Motions with Diffusion Models via Coherent Sampling

1 code implementation 3 Aug 2023 Zhao Yang, Bing Su, Ji-Rong Wen

Firstly, they cannot directly generate coherent motions and require additional operations such as interpolation to process the generated actions.

Spatio-Temporal Branching for Motion Prediction using Motion Increments

1 code implementation 2 Aug 2023 Jiexin Wang, Yujie Zhou, Wenwen Qiang, Ying Ba, Bing Su, Ji-Rong Wen

Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications, but it remains a challenging task due to the stochastic and aperiodic nature of future poses.

Human motion prediction Knowledge Distillation +1

Alleviating the Long-Tail Problem in Conversational Recommender Systems

no code implementations 21 Jul 2023 Zhipeng Zhao, Kun Zhou, Xiaolei Wang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen

Conversational recommender systems (CRS) aim to provide the recommendation service via natural language conversations.

Recommendation Systems Retrieval

Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation

1 code implementation 20 Jul 2023 Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang

In this study, we present an initial analysis of the factual knowledge boundaries of LLMs and how retrieval augmentation affects LLMs on open-domain QA.

Open-Domain Question Answering Retrieval

Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study

1 code implementation 16 Jul 2023 Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on emergent abilities, which are important characteristics that distinguish LLMs from small language models.

Instruction Following Quantization

RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit

1 code implementation 8 Jun 2023 Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, Ji-Rong Wen

To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a RETrieval-Augmented LLM toolkit.

Answer Generation Fact Checking +4

Improving Conversational Recommendation Systems via Counterfactual Data Simulation

1 code implementation 5 Jun 2023 Xiaolei Wang, Kun Zhou, Xinyu Tang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen

To develop our approach, we characterize user preference and organize the conversation flow by the entities involved in the dialogue, and design a multi-stage recommendation dialogue simulator based on a conversation flow language model.

Data Augmentation Language Modelling +1

When Large Language Model based Agent Meets User Behavior Analysis: A Novel User Simulation Paradigm

1 code implementation 5 Jun 2023 Lei Wang, Jingsen Zhang, Hao Yang, ZhiYuan Chen, Jiakai Tang, Zeyu Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, Jun Xu, Zhicheng Dou, Jun Wang, Ji-Rong Wen

We argue that these models present significant opportunities for reliable user simulation, and have the potential to revolutionize traditional study paradigms in user behavior analysis.

Language Modelling Large Language Model +2

Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning

1 code implementation 4 Jun 2023 Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, Ji-Rong Wen

Based on this finding, we propose a new approach that can deliberate the reasoning steps with tool interfaces, namely DELI.

Zero-shot Visual Question Answering with Language Model Feedback

1 code implementation 26 May 2023 Yifan Du, Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we propose a novel language model guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA).

Language Modelling Question Answering +1

ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models

1 code implementation 23 May 2023 Zhipeng Chen, Kun Zhou, Beichen Zhang, Zheng Gong, Wayne Xin Zhao, Ji-Rong Wen

To improve the reasoning abilities, we propose ChatCoT, a tool-augmented chain-of-thought reasoning framework for chat-based LLMs.

Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models

1 code implementation 22 May 2023 Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen

The recent success of large language models (LLMs) has shown great potential to develop more powerful conversational recommender systems (CRSs), which rely on natural language conversations to satisfy user needs.

Explanation Generation Recommendation Systems

HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models

2 code implementations 19 May 2023 Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

To understand what types of content, and to what extent, LLMs are apt to hallucinate, we introduce the Hallucination Evaluation for Large Language Models (HaluEval) benchmark, a large collection of generated and human-annotated hallucinated samples for evaluating the performance of LLMs in recognizing hallucination.

When Search Meets Recommendation: Learning Disentangled Search Representation for Recommendation

1 code implementation 18 May 2023 Zihua Si, Zhongxiang Sun, Xiao Zhang, Jun Xu, Xiaoxue Zang, Yang Song, Kun Gai, Ji-Rong Wen

In our paper, we propose a Search-Enhanced framework for the Sequential Recommendation (SESRec) that leverages users' search interests for recommendation, by disentangling similar and dissimilar representations within S&R behaviors.

Contrastive Learning Disentanglement +1

The Web Can Be Your Oyster for Improving Large Language Models

1 code implementation 18 May 2023 Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jingyuan Wang, Jian-Yun Nie, Ji-Rong Wen

In order to further improve the capacity of LLMs for knowledge-intensive tasks, we consider augmenting LLMs with the large-scale web using a search engine.


TOME: A Two-stage Approach for Model-based Retrieval

no code implementations 18 May 2023 Ruiyang Ren, Wayne Xin Zhao, Jing Liu, Hua Wu, Ji-Rong Wen, Haifeng Wang

Recently, model-based retrieval has emerged as a new paradigm in text retrieval that discards the index in the traditional retrieval model and instead memorizes the candidate corpora using model parameters.

Natural Questions Retrieval +1

Evaluating Object Hallucination in Large Vision-Language Models

2 code implementations 17 May 2023 YiFan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, Ji-Rong Wen

Despite the promising progress on LVLMs, we find that LVLMs suffer from the hallucination problem, i.e., they tend to generate objects that are inconsistent with the target images in the descriptions.

StructGPT: A General Framework for Large Language Model to Reason over Structured Data

1 code implementation 16 May 2023 Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, Ji-Rong Wen

Specifically, we propose an invoking-linearization-generation procedure to support LLMs in reasoning on structured data with the help of external interfaces.

Language Modelling Large Language Model +1

Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach

no code implementations 11 May 2023 Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen

Inspired by the recent progress on large language models (LLMs), we take a different approach to developing the recommendation models, considering recommendation as instruction following by LLMs.

Instruction Following Language Modelling +2

Diffusion-NAT: Self-Prompting Discrete Diffusion for Non-Autoregressive Text Generation

no code implementations 6 May 2023 Kun Zhou, YiFan Li, Wayne Xin Zhao, Ji-Rong Wen

To solve it, we propose Diffusion-NAT, which introduces discrete diffusion models (DDM) into NAR text-to-text generation and integrates BART to improve the performance.

Denoising Text Generation

GlyphDiffusion: Text Generation as Image Generation

no code implementations 25 Apr 2023 Junyi Li, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

In this way, conditional text generation can be cast as a glyph image generation task, and it is then natural to apply continuous diffusion models to discrete texts.

Conditional Text Generation Glyph Image Generation +2

EulerNet: Adaptive Feature Interaction Learning via Euler's Formula for CTR Prediction

1 code implementation 21 Apr 2023 Zhen Tian, Ting Bai, Wayne Xin Zhao, Ji-Rong Wen, Zhao Cao

EulerNet converts the exponential powers of feature interactions into simple linear combinations of the modulus and phase of the complex features, making it possible to adaptively learn the high-order feature interactions in an efficient way.

Click-Through Rate Prediction
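The modulus-and-phase decomposition described in the EulerNet snippet above rests on Euler's formula; as a brief sketch of the underlying identity (standard notation, not taken from the paper):

```latex
% Euler's formula: a complex feature in polar form
e^{i\theta} = \cos\theta + i\sin\theta,
\qquad z_j = r_j e^{i\theta_j}.

% An exponential-power feature interaction then factorizes so that the
% interaction orders a_j act linearly on the log-modulus and the phase:
\prod_j z_j^{a_j}
  = \exp\Big(\sum_j a_j \ln r_j\Big)\,\exp\Big(i \sum_j a_j \theta_j\Big).
```

Because the orders $a_j$ enter linearly in this log-polar space, high-order interactions can be learned adaptively with simple linear layers.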

WebBrain: Learning to Generate Factually Correct Articles for Queries by Grounding on Large Web Corpus

1 code implementation 10 Apr 2023 Hongjing Qian, Yutao Zhu, Zhicheng Dou, Haoqi Gu, Xinyu Zhang, Zheng Liu, Ruofei Lai, Zhao Cao, Jian-Yun Nie, Ji-Rong Wen

In this paper, we introduce a new NLP task -- generating short factual articles with references for queries by mining supporting evidence from the Web.

Retrieval Text Generation

A Survey of Large Language Models

1 code implementation 31 Mar 2023 Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, YiFan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen

To discriminate the difference in parameter scale, the research community has coined the term large language models (LLMs) for the PLMs of significant size.

Language Modelling

Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture

no code implementations 27 Mar 2023 Peiyu Liu, Ze-Feng Gao, Yushuo Chen, Wayne Xin Zhao, Ji-Rong Wen

Based on such a decomposition, our architecture shares the central tensor across all layers for reducing the model size and meanwhile keeps layer-specific auxiliary tensors (also using adapters) for enhancing the adaptation flexibility.

Dually Enhanced Propensity Score Estimation in Sequential Recommendation

1 code implementation 15 Mar 2023 Chen Xu, Jun Xu, Xu Chen, Zhenghua Dong, Ji-Rong Wen

According to the graph, two complementary propensity scores are estimated from the views of item and user, respectively, based on the same set of user feedback data.

Sequential Recommendation

Diffusion Models for Non-autoregressive Text Generation: A Survey

1 code implementation 12 Mar 2023 YiFan Li, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen

In this survey, we review the recent progress in diffusion models for NAR text generation.

Text Generation

TextBox 2.0: A Text Generation Library with Pre-trained Language Models

1 code implementation 26 Dec 2022 Tianyi Tang, Junyi Li, Zhipeng Chen, Yiwen Hu, Zhuohao Yu, Wenxun Dai, Zican Dong, Xiaoxue Cheng, Yuhao Wang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs).

Abstractive Text Summarization Data-to-Text Generation +7

Visually-augmented pretrained language models for NLP tasks without images

1 code implementation 15 Dec 2022 Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Qinyu Zhang, Ji-Rong Wen

Although pre-trained language models (PLMs) have shown impressive performance through text-only self-supervised training, they are found to lack visual semantics or commonsense.


MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers

1 code implementation 15 Dec 2022 Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen

Pre-trained Transformers (e.g., BERT) have been commonly used in existing dense retrieval methods for parameter initialization, and recent studies are exploring more effective pre-training tasks for further improving the quality of dense vectors.

Passage Retrieval Retrieval

UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph

1 code implementation 2 Dec 2022 Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen

Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question on a large-scale Knowledge Graph (KG).

Language Modelling Multi-hop Question Answering +2

CDSM: Cascaded Deep Semantic Matching on Textual Graphs Leveraging Ad-hoc Neighbor Selection

1 code implementation 30 Nov 2022 Jing Yao, Zheng Liu, Junhan Yang, Zhicheng Dou, Xing Xie, Ji-Rong Wen

In the first stage, a lightweight CNN-based ad-hoc neighbor selector is deployed to filter useful neighbors for the matching task at a small computation cost.

Recent Advances in RecBole: Extensions with more Practical Considerations

1 code implementation 28 Nov 2022 Lanling Xu, Zhen Tian, Gaowei Zhang, Lei Wang, Junjie Zhang, Bowen Zheng, YiFan Li, Yupeng Hou, Xingyu Pan, Yushuo Chen, Wayne Xin Zhao, Xu Chen, Ji-Rong Wen

In order to present the recent updates in RecBole, we write this technical report to introduce our latest improvements to RecBole.

Dense Text Retrieval based on Pretrained Language Models: A Survey

2 code implementations 27 Nov 2022 Wayne Xin Zhao, Jing Liu, Ruiyang Ren, Ji-Rong Wen

With powerful PLMs, we can effectively learn the representations of queries and texts in the latent representation space, and further construct the semantic matching function between the dense vectors for relevance modeling.

Retrieval Text Retrieval
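The survey snippet above describes relevance as a semantic matching function between dense query and text vectors. A minimal toy sketch of that scoring scheme (the vectors and document names here are made up, standing in for PLM encoder outputs):

```python
# Dense retrieval in miniature: queries and texts live in a shared latent
# space, and relevance is modeled as the inner product of their vectors.

def dot(u, v):
    """Inner product of two equal-length dense vectors."""
    return sum(a * b for a, b in zip(u, v))

query = [0.2, 0.7, 0.1]            # stand-in for an encoded query
docs = {
    "doc_a": [0.1, 0.9, 0.0],      # semantically close to the query
    "doc_b": [0.9, 0.0, 0.1],      # about something else
}

# Score every candidate text, then rank by descending relevance.
scores = {name: dot(query, vec) for name, vec in docs.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # doc_a should rank above doc_b
```

Real systems precompute document vectors offline and use approximate nearest-neighbor search instead of this exhaustive loop, but the matching function is the same.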

Directed Acyclic Graph Factorization Machines for CTR Prediction via Knowledge Distillation

1 code implementation 21 Nov 2022 Zhen Tian, Ting Bai, Zibin Zhang, Zhiyuan Xu, Kangyi Lin, Ji-Rong Wen, Wayne Xin Zhao

Some recent knowledge distillation based methods transfer knowledge from complex teacher models to shallow student models for accelerating the online model inference.

Click-Through Rate Prediction Knowledge Distillation +1

SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval

1 code implementation 21 Oct 2022 Kun Zhou, Yeyun Gong, Xiao Liu, Wayne Xin Zhao, Yelong Shen, Anlei Dong, Jingwen Lu, Rangan Majumder, Ji-Rong Wen, Nan Duan, Weizhu Chen

Thus, we propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.

Retrieval Text Retrieval

Privacy-Preserved Neural Graph Similarity Learning

1 code implementation 21 Oct 2022 Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen

To develop effective and efficient graph similarity learning (GSL) models, a series of data-driven neural algorithms have been proposed in recent years.

Graph Matching Graph Similarity +1

Law Article-Enhanced Legal Case Matching: a Causal Learning Approach

1 code implementation 20 Oct 2022 Zhongxiang Sun, Jun Xu, Xiao Zhang, Zhenhua Dong, Ji-Rong Wen

We show that the framework is model-agnostic, and a number of legal case matching models can be applied as the underlying models.

Semantic Text Matching Text Matching

Partial Information as Full: Reward Imputation with Sketching in Bandits

no code implementations 13 Oct 2022 Xiao Zhang, Ninglu Shao, Zihua Si, Jun Xu, Wenhan Wang, Hanjing Su, Ji-Rong Wen

In this paper, we propose an efficient reward imputation approach using sketching for CBB, which completes the unobserved rewards with imputed rewards approximating the full-information feedback.

Imputation regression

Modeling Multiple Views via Implicitly Preserving Global Consistency and Local Complementarity

2 code implementations 16 Sep 2022 Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Farid Razzak, Ji-Rong Wen, Hui Xiong

To this end, we propose a methodology, specifically consistency and complementarity network (CoCoNet), which avails of strict global inter-view consistency and local cross-view complementarity preserving regularization to comprehensively learn representations from multiple views.

Representation Learning Self-Supervised Learning

A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language

1 code implementation 12 Sep 2022 Bing Su, Dazhao Du, Zhao Yang, Yujie Zhou, Jiangmeng Li, Anyi Rao, Hao Sun, Zhiwu Lu, Ji-Rong Wen

Although artificial intelligence (AI) has made significant progress in understanding molecules in a wide range of fields, existing models generally acquire the single cognitive ability from the single molecular modality.

Contrastive Learning Cross-Modal Retrieval +4

Enhancing User Behavior Sequence Modeling by Generative Tasks for Session Search

1 code implementation 23 Aug 2022 Haonan Chen, Zhicheng Dou, Yutao Zhu, Zhao Cao, Xiaohua Cheng, Ji-Rong Wen

To help the encoding of the current user behavior sequence, we propose to use a decoder and the information of future sequences and a supplemental query.

Session Search

Ultron: An Ultimate Retriever on Corpus with a Model-based Indexer

no code implementations 19 Aug 2022 Yujia Zhou, Jing Yao, Zhicheng Dou, Ledell Wu, Peitian Zhang, Ji-Rong Wen

In order to unify these two stages, we explore a model-based indexer for document retrieval.


Modeling Two-Way Selection Preference for Person-Job Fit

1 code implementation 18 Aug 2022 Chen Yang, Yupeng Hou, Yang Song, Tao Zhang, Ji-Rong Wen, Wayne Xin Zhao

To model the two-way selection preference from the dual-perspective of job seekers and employers, we incorporate two different nodes for each candidate (or job) and characterize both successful matching and failed matching via a unified dual-perspective interaction graph.

Contrastive Learning Graph Representation Learning +1

Multimodal foundation models are better simulators of the human brain

1 code implementation 17 Aug 2022 Haoyu Lu, Qiongyi Zhou, Nanyi Fei, Zhiwu Lu, Mingyu Ding, Jingyuan Wen, Changde Du, Xin Zhao, Hao Sun, Huiguang He, Ji-Rong Wen

Further, from the perspective of neural encoding (based on our foundation model), we find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones.

STAR-GNN: Spatial-Temporal Video Representation for Content-based Retrieval

no code implementations 15 Aug 2022 Guoping Zhao, Bingqing Zhang, Mingyu Zhang, Yaxian Li, Jiajun Liu, Ji-Rong Wen

It models a video with a lattice feature graph in which the nodes represent regions of different granularity, with weighted edges that represent the spatial and temporal links.

Representation Learning Retrieval +1

MVP: Multi-task Supervised Pre-training for Natural Language Generation

2 code implementations 24 Jun 2022 Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation.

Text Generation

Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning

1 code implementation 19 Jun 2022 Xiaolei Wang, Kun Zhou, Ji-Rong Wen, Wayne Xin Zhao

Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified approach.

Language Modelling Recommendation Systems +1

RecBole 2.0: Towards a More Up-to-Date Recommendation Library

2 code implementations 15 Jun 2022 Wayne Xin Zhao, Yupeng Hou, Xingyu Pan, Chen Yang, Zeyu Zhang, Zihan Lin, Jingsen Zhang, Shuqing Bian, Jiakai Tang, Wenqi Sun, Yushuo Chen, Lanling Xu, Gaowei Zhang, Zhen Tian, Changxin Tian, Shanlei Mu, Xinyan Fan, Xu Chen, Ji-Rong Wen

In order to support the study of recent advances in recommender systems, this paper presents an extended recommendation library consisting of eight packages for up-to-date topics and architectures.

Benchmarking Data Augmentation +3

JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding

1 code implementation 13 Jun 2022 Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou, Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen

Considering the complex nature of mathematical texts, we design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.

Language Modelling

Towards Universal Sequence Representation Learning for Recommender Systems

1 code implementation 13 Jun 2022 Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

In order to develop effective sequential recommenders, a series of sequence representation learning (SRL) methods are proposed to model historical user behaviors.

Recommendation Systems Representation Learning

Feature-aware Diversified Re-ranking with Disentangled Representations for Relevant Recommendation

no code implementations 10 Jun 2022 Zihan Lin, Hui Wang, Jingshu Mao, Wayne Xin Zhao, Cheng Wang, Peng Jiang, Ji-Rong Wen

Relevant recommendation is a special recommendation scenario which provides relevant items when users express interests on one target item (e.g., click, like, and purchase).

Disentanglement Re-Ranking

Negative Sampling for Contrastive Representation Learning: A Review

no code implementations 1 Jun 2022 Lanling Xu, Jianxun Lian, Wayne Xin Zhao, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Ji-Rong Wen

The learn-to-compare paradigm of contrastive representation learning (CRL), which compares positive samples with negative ones for representation learning, has achieved great success in a wide range of domains, including natural language processing, computer vision, information retrieval and graph learning.

Graph Learning Information Retrieval +2

Learning to Transfer Prompts for Text Generation

1 code implementation NAACL 2022 Junyi Li, Tianyi Tang, Jian-Yun Nie, Ji-Rong Wen, Wayne Xin Zhao

First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.

Text Generation

Debiased Contrastive Learning of Unsupervised Sentence Representations

1 code implementation ACL 2022 Kun Zhou, Beichen Zhang, Wayne Xin Zhao, Ji-Rong Wen

In DCLR, we design an instance weighting method to punish false negatives and generate noise-based negatives to guarantee the uniformity of the representation space.

Contrastive Learning Semantic Textual Similarity

A Thorough Examination on Zero-shot Dense Retrieval

no code implementations 27 Apr 2022 Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qifei Wu, Yuchen Ding, Hua Wu, Haifeng Wang, Ji-Rong Wen

Recent years have witnessed significant advances in dense retrieval (DR) based on powerful pre-trained language models (PLMs).


COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval

no code implementations CVPR 2022 Haoyu Lu, Nanyi Fei, Yuqi Huo, Yizhao Gao, Zhiwu Lu, Ji-Rong Wen

Under a fair comparison setting, our COTS achieves the highest performance among all two-stream methods and comparable performance (while being 10,800x faster in inference) w.r.t.

Contrastive Learning Cross-Modal Retrieval +5

Leveraging Search History for Improving Person-Job Fit

no code implementations 27 Mar 2022 Yupeng Hou, Xingyu Pan, Wayne Xin Zhao, Shuqing Bian, Yang Song, Tao Zhang, Ji-Rong Wen

As the core technique of online recruitment platforms, person-job fit can improve hiring efficiency by accurately matching job positions with qualified candidates.

Text Matching

Learning to Answer Questions in Dynamic Audio-Visual Scenarios

1 code implementation CVPR 2022 Guangyao Li, Yake Wei, Yapeng Tian, Chenliang Xu, Ji-Rong Wen, Di Hu

In this paper, we focus on the Audio-Visual Question Answering (AVQA) task, which aims to answer questions regarding different visual objects, sounds, and their associations in videos.

audio-visual learning Audio-visual Question Answering +4

MISC: A MIxed Strategy-Aware Model Integrating COMET for Emotional Support Conversation

1 code implementation ACL 2022 Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, Rui Yan

Applying existing methods to emotional support conversation -- which provides valuable assistance to people who are in need -- has two major limitations: (a) they generally employ a conversation-level emotion label, which is too coarse-grained to capture user's instant mental state; (b) most of them focus on expressing empathy in the response(s) rather than gradually reducing user's distress.

Neural Graph Matching for Pre-training Graph Neural Networks

1 code implementation 3 Mar 2022 Yupeng Hou, Binbin Hu, Wayne Xin Zhao, Zhiqiang Zhang, Jun Zhou, Ji-Rong Wen

In this way, we can learn adaptive representations for a given graph when paired with different graphs, and both node- and graph-level characteristics are naturally considered in a single pre-training task.

Graph Matching

Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models

2 code implementations COLING 2022 Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, Zhong-Yi Lu, Ji-Rong Wen

Recently, the Mixture-of-Experts (MoE) architecture has achieved remarkable success in increasing the model capacity of large-scale language models.

Language Modelling Multi-Task Learning +2

Filter-enhanced MLP is All You Need for Sequential Recommendation

1 code implementation 28 Feb 2022 Kun Zhou, Hui Yu, Wayne Xin Zhao, Ji-Rong Wen

Recently, deep neural networks such as RNN, CNN and Transformer have been applied in the task of sequential recommendation, which aims to capture the dynamic preference characteristics from logged user behavior data for accurate recommendation.

Sequential Recommendation

Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation

no code implementations 14 Feb 2022 Xu Chen, Yongfeng Zhang, Ji-Rong Wen

Beyond summarizing the previous work, we also analyze the (dis)advantages of existing evaluation methods and provide a series of guidelines on how to select them.

Explainable Recommendation Recommendation Systems

A Model-Agnostic Causal Learning Framework for Recommendation using Search Data

1 code implementation 9 Feb 2022 Zihua Si, Xueran Han, Xiao Zhang, Jun Xu, Yue Yin, Yang Song, Ji-Rong Wen

In this paper, we propose a model-agnostic framework named IV4Rec that can effectively decompose the embedding vectors into these two parts, hence enhancing recommendation results.

Recommendation Systems

Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited

1 code implementation 4 Feb 2022 Mingguo He, Zhewei Wei, Ji-Rong Wen

GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral graph convolutions.

GPR Graph Learning +1
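For context on the bases being compared in the snippet above: Chebyshev-based spectral graph convolutions (as in ChebNet) expand the filter in Chebyshev polynomials generated by a three-term recurrence. A brief sketch in standard notation (not taken from this paper):

```latex
% Chebyshev polynomials of the first kind
T_0(x) = 1, \qquad T_1(x) = x, \qquad
T_{k+1}(x) = 2x\,T_k(x) - T_{k-1}(x).

% A K-th order spectral filter on rescaled Laplacian eigenvalues
% \hat{\lambda} \in [-1, 1], with learnable coefficients \theta_k:
g_{\theta}(\hat{\lambda}) = \sum_{k=0}^{K} \theta_k\, T_k(\hat{\lambda}).
```

GPR-GNN and BernNet instead expand $g_{\theta}$ in the Monomial basis $\hat{\lambda}^k$ and the Bernstein basis, respectively, which is the comparison the snippet refers to.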

Context-Tuning: Learning Contextualized Prompts for Natural Language Generation

1 code implementation COLING 2022 Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Secondly, we use continuous inverse prompting to improve the process of natural language generation by modeling an inverse generation process from output to input, making the generated text more relevant to the inputs.

Text Generation

Pretrained Language Models for Text Generation: A Survey

no code implementations 14 Jan 2022 Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

We begin with introducing three key aspects of applying PLMs to text generation: 1) how to encode the input into representations preserving input semantics which can be fused into PLMs; 2) how to design an effective PLM to serve as the generation model; and 3) how to effectively optimize PLMs given the reference text and to ensure that the generated texts satisfy special text properties.

Text Generation

Class-aware Sounding Objects Localization via Audiovisual Correspondence

1 code implementation 22 Dec 2021 Di Hu, Yake Wei, Rui Qian, Weiyao Lin, Ruihua Song, Ji-Rong Wen

To address this problem, we propose a two-stage step-by-step learning framework to localize and recognize sounding objects in complex audiovisual scenarios using only the correspondence between audio and vision.

object-detection Object Detection +2

Compressed Video Contrastive Learning

no code implementations NeurIPS 2021 Yuqi Huo, Mingyu Ding, Haoyu Lu, Nanyi Fei, Zhiwu Lu, Ji-Rong Wen, Ping Luo

To enhance the representation ability of the motion vectors, hence the effectiveness of our method, we design a cross guidance contrastive learning algorithm based on multi-instance InfoNCE loss, where motion vectors can take supervision signals from RGB frames and vice versa.

Contrastive Learning Representation Learning
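
The multi-instance InfoNCE objective named above can be sketched in plain numpy; here motion-vector features use the matched RGB features of the same clip as positives (the feature values and the single-positive simplification are illustrative assumptions, not the paper's full cross-guidance algorithm):

```python
import numpy as np

def info_nce(query, keys, pos_idx, tau=0.1):
    """-log softmax similarity of each query's positive key (InfoNCE)."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / tau                        # cosine similarity / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(query)), pos_idx].mean()

rng = np.random.default_rng(1)
mv = rng.normal(size=(4, 8))                  # motion-vector features of 4 clips
rgb = mv + 0.01 * rng.normal(size=(4, 8))     # matched RGB features as positives
loss_matched = info_nce(mv, rgb, np.arange(4))
loss_shuffled = info_nce(mv, rgb, np.array([3, 2, 1, 0]))
# matched positives give a much lower loss than mismatched ones
```

"Vice versa" in the snippet corresponds to a second, symmetric loss with `rgb` as queries and `mv` as keys.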

PSSL: Self-supervised Learning for Personalized Search with Contrastive Sampling

1 code implementation 24 Nov 2021 Yujia Zhou, Zhicheng Dou, Yutao Zhu, Ji-Rong Wen

Personalized search plays a crucial role in improving user search experience owing to its ability to build user profiles based on historical behaviors.

Self-Supervised Learning

Towards artificial general intelligence via a multimodal foundation model

1 code implementation 27 Oct 2021 Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun, Ji-Rong Wen

To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained with huge multimodal data, which can be quickly adapted for various downstream cognitive tasks.

Image Classification Reading Comprehension +2

Image Dataset Compression Based on Matrix Product States

no code implementations 29 Sep 2021 Ze-Feng Gao, Peiyu Liu, Xiao-Hui Zhang, Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen

Based on the MPS structure, we propose a new dataset compression method that compresses datasets by filtering long-range correlation information in task-agnostic scenarios and uses dataset distillation to supplement the information in task-specific scenarios.
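
As a rough illustration of the MPS structure the method builds on, here is a textbook TT-SVD sketch that factors a small tensor into MPS cores via sequential truncated SVDs (this shows only the decomposition itself, not the paper's dataset-compression pipeline); shrinking `max_bond` below the full rank is what discards long-range correlation information:

```python
import numpy as np

def mps_decompose(tensor, max_bond):
    """Decompose a d-way tensor into MPS cores by sequential truncated SVD."""
    cores, rest, r = [], tensor, 1
    dims = tensor.shape
    for d in dims[:-1]:
        mat = rest.reshape(r * d, -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        keep = min(max_bond, len(S))              # truncate the bond dimension
        cores.append(U[:, :keep].reshape(r, d, keep))
        rest = S[:keep, None] * Vt[:keep]
        r = keep
    cores.append(rest.reshape(r, dims[-1], 1))
    return cores

def mps_reconstruct(cores):
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
t = rng.normal(size=(4, 4, 4, 4))
cores = mps_decompose(t, max_bond=64)   # bond large enough for full rank: exact
approx = mps_reconstruct(cores)
```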

Log-Polar Space Convolution

no code implementations 29 Sep 2021 Bing Su, Ji-Rong Wen

Convolutional neural networks use regular quadrilateral convolution kernels to extract features.

One Chatbot Per Person: Creating Personalized Chatbots based on Implicit User Profiles

1 code implementation 20 Aug 2021 Zhengyi Ma, Zhicheng Dou, Yutao Zhu, Hanxun Zhong, Ji-Rong Wen

Specifically, leveraging the benefits of Transformer on language understanding, we train a personalized language model to construct a general user profile from the user's historical responses.

Chatbot Language Modelling

Pre-training for Ad-hoc Retrieval: Hyperlink is Also You Need

1 code implementation 20 Aug 2021 Zhengyi Ma, Zhicheng Dou, Wei Xu, Xinyu Zhang, Hao Jiang, Zhao Cao, Ji-Rong Wen

In this paper, we propose to leverage the large-scale hyperlinks and anchor texts to pre-train the language model for ad-hoc retrieval.

Language Modelling Retrieval

Learning Implicit User Profiles for Personalized Retrieval-Based Chatbot

1 code implementation 18 Aug 2021 Hongjin Qian, Zhicheng Dou, Yutao Zhu, Yueyuan Ma, Ji-Rong Wen

To learn a user's personalized language style, we elaborately build language models from shallow to deep using the user's historical responses; to model a user's personalized preferences, we explore the conditional relations underneath each post-response pair of the user.

Chatbot Retrieval

Modeling Relevance Ranking under the Pre-training and Fine-tuning Paradigm

no code implementations 12 Aug 2021 Lin Bo, Liang Pang, Gang Wang, Jun Xu, Xiuqiang He, Ji-Rong Wen

Experimental results based on three publicly available benchmarks show that in both implementations, Pre-Rank outperforms the underlying ranking models and achieves state-of-the-art performance.

Document Ranking Information Retrieval +3

Self-supervised Audiovisual Representation Learning for Remote Sensing Data

1 code implementation 2 Aug 2021 Konrad Heidler, Lichao Mou, Di Hu, Pu Jin, Guangyao Li, Chuang Gan, Ji-Rong Wen, Xiao Xiang Zhu

By fine-tuning the models on a number of commonly used remote sensing datasets, we show that our approach outperforms existing pre-training strategies for remote sensing imagery.

Representation Learning Transfer Learning

A Pre-training Strategy for Zero-Resource Response Selection in Knowledge-Grounded Conversations

no code implementations ACL 2021 Chongyang Tao, Changyu Chen, Jiazhan Feng, Ji-Rong Wen, Rui Yan

Recently, many studies are emerging towards building a retrieval-based dialogue system that is able to effectively leverage background knowledge (e.g., documents) when conversing with humans.

Language Modelling Retrieval

Log-Polar Space Convolution for Convolutional Neural Networks

1 code implementation 26 Jul 2021 Bing Su, Ji-Rong Wen

Convolutional neural networks use regular quadrilateral convolution kernels to extract features.
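
In contrast to the regular quadrilateral kernels described above, a log-polar layout spaces its radial bins geometrically. A hypothetical numpy sketch of such a binning (this construction is an illustration of the log-polar idea, not the paper's exact kernel):

```python
import numpy as np

def log_polar_bins(radius, n_rho, n_theta):
    """Assign each offset in a (2*radius+1)^2 neighborhood to a log-polar bin.

    Radial bin edges grow geometrically, so bins near the center are
    fine-grained and far-away bins are coarse.
    """
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r = np.hypot(ys, xs)
    theta = np.arctan2(ys, xs)
    edges = np.geomspace(1.0, radius + 1e-9, n_rho + 1)   # log-spaced radii
    rho_bin = np.clip(np.searchsorted(edges, r, side='right') - 1, 0, n_rho - 1)
    theta_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    return rho_bin, theta_bin

# 9x9 neighborhood collapsed to 3 radial x 8 angular bins
rho_bin, theta_bin = log_polar_bins(radius=4, n_rho=3, n_theta=8)
```

A convolution over such bins aggregates many distant pixels into few parameters, enlarging the receptive field without a larger dense kernel.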

Curriculum Pre-Training Heterogeneous Subgraph Transformer for Top-$N$ Recommendation

no code implementations 12 Jun 2021 Hui Wang, Kun Zhou, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen

Due to the flexibility in modelling data heterogeneity, heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in top-$N$ recommender systems, called \emph{HIN-based recommendation}.

Recommendation Systems

A Joint Model for Dropped Pronoun Recovery and Conversational Discourse Parsing in Chinese Conversational Speech

1 code implementation ACL 2021 Jingxuan Yang, Kerui Xu, Jun Xu, Si Li, Sheng Gao, Jun Guo, Nianwen Xue, Ji-Rong Wen

A second (multi-relational) GCN is then applied to the utterance states to produce a discourse relation-augmented representation for the utterances, which are then fused together with token states in each utterance as input to a dropped pronoun recovery layer.

Discourse Parsing

Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators

1 code implementation ACL 2021 Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen

This paper presents a novel pre-trained language models (PLM) compression approach based on the matrix product operator (short as MPO) from quantum many-body physics.

Language Modelling Model Compression
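
A minimal sketch of the MPO idea (an illustrative two-core factorization, not the paper's full decomposition): a weight matrix is reshaped into a higher-order tensor and factored, so that with a small bond rank r the cores hold (m1*n1 + m2*n2)*r parameters instead of m*n:

```python
import numpy as np

def mpo_two_cores(W, m_dims, n_dims, rank):
    """Factor W (m1*m2 x n1*n2) into two MPO cores via one truncated SVD."""
    (m1, m2), (n1, n2) = m_dims, n_dims
    # regroup indices so each core owns one (row, col) index pair
    T = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, S, Vt = np.linalg.svd(T, full_matrices=False)
    core1 = (U[:, :rank] * S[:rank]).reshape(m1, n1, rank)
    core2 = Vt[:rank].reshape(rank, m2, n2)
    return core1, core2

def mpo_contract(core1, core2):
    return np.einsum('air,rcj->acij', core1, core2).reshape(
        core1.shape[0] * core2.shape[1], core1.shape[1] * core2.shape[2])

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))
c1, c2 = mpo_two_cores(W, (4, 4), (4, 4), rank=16)    # full rank: exact
W_exact = mpo_contract(c1, c2)
c1s, c2s = mpo_two_cores(W, (4, 4), (4, 4), rank=4)   # truncated: lossy but small
```

With rank 4 the two cores store 128 numbers versus 256 in `W`; the lightweight fine-tuning angle comes from updating only small cores while freezing the rest.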

Pretrained Language Models for Text Generation: A Survey

no code implementations 21 May 2021 Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation.

Text Generation

Knowledge-based Review Generation by Coherence Enhanced Text Planning

no code implementations 9 May 2021 Junyi Li, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, Ji-Rong Wen

For global coherence, we design a hierarchical self-attentive architecture with both subgraph- and node-level attention to enhance the correlations between subgraphs.

Informativeness Knowledge Graphs +2

Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals

1 code implementation 11 Jan 2021 Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen

In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network.

Knowledge Base Question Answering Semantic Parsing

TextBox: A Unified, Modularized, and Extensible Framework for Text Generation

1 code implementation ACL 2021 Junyi Li, Tianyi Tang, Gaole He, Jinhao Jiang, Xiaoxuan Hu, Puzhao Xie, Zhipeng Chen, Zhuohao Yu, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we release an open-source library, called TextBox, to provide a unified, modularized, and extensible text generation framework.

Text Generation

Self-Supervised Video Representation Learning with Constrained Spatiotemporal Jigsaw

no code implementations 1 Jan 2021 Yuqi Huo, Mingyu Ding, Haoyu Lu, Zhiwu Lu, Tao Xiang, Ji-Rong Wen, Ziyuan Huang, Jianwen Jiang, Shiwei Zhang, Mingqian Tang, Songfang Huang, Ping Luo

With the constrained jigsaw puzzles, instead of solving them directly, which could still be extremely hard, we carefully design four surrogate tasks that are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels.

Representation Learning

RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms

1 code implementation 3 Nov 2020 Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, Yingqian Min, Zhichao Feng, Xinyan Fan, Xu Chen, Pengfei Wang, Wendi Ji, Yaliang Li, Xiaoling Wang, Ji-Rong Wen

In this library, we implement 73 recommendation models on 28 benchmark datasets, covering the categories of general recommendation, sequential recommendation, context-aware recommendation and knowledge-based recommendation.

Collaborative Filtering Sequential Recommendation

Scalable Graph Neural Networks via Bidirectional Propagation

1 code implementation NeurIPS 2020 Ming Chen, Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, Ji-Rong Wen

Most notably, GBP can deliver superior performance on a graph with over 60 million nodes and 1.8 billion edges in less than half an hour on a single machine.

Graph Sampling

Transformer-GCRF: Recovering Chinese Dropped Pronouns with General Conditional Random Fields

1 code implementation Findings of the Association for Computational Linguistics 2020 Jingxuan Yang, Kerui Xu, Jun Xu, Si Li, Sheng Gao, Jun Guo, Ji-Rong Wen, Nianwen Xue

Exploratory analysis also demonstrates that the GCRF did help to capture the dependencies between pronouns in neighboring utterances, thus contributing to the performance improvements.

Machine Translation Translation

Pchatbot: A Large-Scale Dataset for Personalized Chatbot

2 code implementations 28 Sep 2020 Hongjin Qian, Xiaohe Li, Hanxun Zhong, Yu Guo, Yueyuan Ma, Yutao Zhu, Zhanliang Liu, Zhicheng Dou, Ji-Rong Wen

This enables the development of personalized dialogue models that directly learn implicit user personality from the user's dialogue history.

Learning to Match Jobs with Resumes from Sparse Interaction Data using Multi-View Co-Teaching Network

no code implementations 25 Sep 2020 Shuqing Bian, Xu Chen, Wayne Xin Zhao, Kun Zhou, Yupeng Hou, Yang Song, Tao Zhang, Ji-Rong Wen

Compared with pure text-based matching models, the proposed approach is able to learn better data representations from limited or even sparse interaction data, which is more resistant to noise in training data.

Text Matching

Leveraging Historical Interaction Data for Improving Conversational Recommender System

no code implementations 19 Aug 2020 Kun Zhou, Wayne Xin Zhao, Hui Wang, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

Most of the existing CRS methods focus on learning effective preference representations for users from conversation data alone.

Recommendation Systems

S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization

2 code implementations 18 Aug 2020 Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

To tackle this problem, we propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation, based on the self-attentive neural architecture.

Self-Supervised Learning Sequential Recommendation

Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient

no code implementations 25 Jul 2020 Haonan Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiaohui Yan, Ji-Rong Wen

Deep Q-learning algorithms often suffer from poor gradient estimations with an excessive variance, resulting in unstable training and poor sampling efficiency.

Q-Learning reinforcement-learning +1
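
The variance-reduction idea can be illustrated outside of Q-learning with a stochastic recursive gradient (SARAH-style) estimator, v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}, on a toy least-squares problem (the problem, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = A @ rng.normal(size=5)            # consistent system: optimum has zero residual

def grad_i(w, i):                     # per-sample gradient of (A[i] @ w - b[i])^2
    return 2 * (A[i] @ w - b[i]) * A[i]

w_prev = rng.normal(size=5)
w0 = w_prev.copy()
v = 2 * A.T @ (A @ w_prev - b) / len(b)   # anchor: one full-batch gradient
w = w_prev - 0.01 * v
for _ in range(200):                  # recursive estimator: single samples only
    i = rng.integers(len(b))
    v = grad_i(w, i) - grad_i(w_prev, i) + v
    w_prev, w = w, w - 0.01 * v
# the residual shrinks relative to the starting point
```

Unlike a plain stochastic gradient, the recursive correction term keeps the estimator's variance small as the iterates move.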

Counterfactual VQA: A Cause-Effect Look at Language Bias

1 code implementation CVPR 2021 Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, Ji-Rong Wen

VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language.

Counterfactual Inference Question Answering +1
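
The cause-effect view boils down to subtracting the language-only branch's prediction from the fused prediction, removing the direct effect of the question (a simplified form of the paper's counterfactual subtraction; the logits below are made-up numbers):

```python
import numpy as np

# Hypothetical answer logits for "what color is the banana?"
answers = ["yellow", "green", "red"]
fused_logits    = np.array([3.0, 2.8, 0.1])   # vision + language branch
language_logits = np.array([2.9, 0.2, 0.1])   # language-only branch (the bias)

# Counterfactual debiasing: keep only what vision adds beyond the question
debiased = fused_logits - language_logits

biased_answer   = answers[int(np.argmax(fused_logits))]   # shortcut answer
debiased_answer = answers[int(np.argmax(debiased))]       # vision-grounded answer
```

With these numbers the fused model parrots the prior "yellow", while the debiased logits favor "green", the option vision actually supports.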

Domain-Adaptive Few-Shot Learning

1 code implementation 19 Mar 2020 An Zhao, Mingyu Ding, Zhiwu Lu, Tao Xiang, Yulei Niu, Jiechao Guan, Ji-Rong Wen, Ping Luo

Existing few-shot learning (FSL) methods make the implicit assumption that the few target class samples are from the same domain as the source class samples.

Domain Adaptation Few-Shot Learning

AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning

no code implementations 28 Feb 2020 Jianhong Zhang, Manli Zhang, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

To address this problem, we propose a graph convolutional network (GCN)-based label denoising (LDN) method to remove the irrelevant images.

Denoising Few-Shot Learning +1

Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting

no code implementations 18 Feb 2020 Kun Zhou, Wayne Xin Zhao, Yutao Zhu, Ji-Rong Wen, Jingsong Yu

Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters.

Meta-Learning across Meta-Tasks for Few-Shot Learning

no code implementations 11 Feb 2020 Nanyi Fei, Zhiwu Lu, Yizhao Gao, Jia Tian, Tao Xiang, Ji-Rong Wen

In this paper, we argue that the inter-meta-task relationships should be exploited and those tasks are sampled strategically to assist in meta-learning.

Domain Adaptation Few-Shot Learning +1

Few-Shot Learning as Domain Adaptation: Algorithm and Analysis

no code implementations 6 Feb 2020 Jiechao Guan, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

Specifically, armed with a set transformer based attention module, we construct each episode with two sub-episodes without class overlap on the seen classes to simulate the domain shift between the seen and unseen classes.

Domain Adaptation Few-Shot Image Classification +1

SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval

2 code implementations 12 Dec 2019 Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xue-Qi Cheng, Ji-Rong Wen

In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents.

Information Retrieval Learning-To-Rank +1
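
Permutation invariance here means a document's score must not depend on the order in which the retrieved set is listed. A tiny numpy sketch of a position-free self-attention scorer showing this property (the weights and feature sizes are arbitrary, and this is only a sketch of the property, not SetRank itself):

```python
import numpy as np

rng = np.random.default_rng(0)
W_q, W_k, W_v = (0.3 * rng.normal(size=(6, 6)) for _ in range(3))

def set_scores(docs):
    """Self-attention over the set with no positional encoding: each
    document's score is unchanged under any reordering of the input set."""
    Q, K, V = docs @ W_q, docs @ W_k, docs @ W_v
    att = np.exp(Q @ K.T / np.sqrt(6))
    att /= att.sum(axis=1, keepdims=True)
    return (att @ V).sum(axis=1)          # one scalar score per document

docs = rng.normal(size=(5, 6))            # 5 retrieved documents, 6 features
perm = rng.permutation(5)
s = set_scores(docs)
s_perm = set_scores(docs[perm])
# s_perm equals s[perm]: shuffling the set only shuffles the scores
```

A pointwise or list-order-sensitive ranker would not satisfy this check, which is the motivation for the set-based architecture.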