no code implementations • Findings (EMNLP) 2021 • Yu Feng, Jing Zhang, Gaole He, Wayne Xin Zhao, Lemao Liu, Quan Liu, Cuiping Li, Hong Chen
Knowledge Base Question Answering (KBQA) aims to answer natural language questions posed over knowledge bases (KBs).
no code implementations • 23 Sep 2023 • Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
Recently, multiple studies have committed to extending the context length and enhancing the long text modeling capabilities of LLMs.
1 code implementation • 24 Aug 2023 • Jiawei Jiang, Chengkai Han, Wayne Xin Zhao, Jingyuan Wang
The field of urban spatial-temporal prediction is advancing rapidly with the development of deep learning techniques and the availability of large-scale datasets.
2 code implementations • 22 Aug 2023 • Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, ZhiYuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen
In this paper, we present a comprehensive survey of these studies, delivering a systematic review of the field of LLM-based autonomous agents from a holistic perspective.
no code implementations • 1 Aug 2023 • Geyang Guo, Jiarong Yang, Fengyuan LU, Jiaxin Qin, Tianyi Tang, Wayne Xin Zhao
From an evaluation perspective, we build a benchmark to judge ancient Chinese translation quality in different scenarios and evaluate the ancient Chinese translation capacities of various existing models.
no code implementations • 21 Jul 2023 • Zhipeng Zhao, Kun Zhou, Xiaolei Wang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen
Conversational recommender systems (CRS) aim to provide the recommendation service via natural language conversations.
1 code implementation • 20 Jul 2023 • Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang
In this study, we present an initial analysis of the factual knowledge boundaries of LLMs and how retrieval augmentation affects LLMs on open-domain QA.
1 code implementation • 16 Jul 2023 • Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen
Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on emergent abilities, which are important characteristics that distinguish LLMs from small language models.
1 code implementation • 26 Jun 2023 • Bowen Zheng, Yupeng Hou, Wayne Xin Zhao, Yang song, HengShu Zhu
Existing RRS models mainly capture static user preferences, neglecting evolving user tastes and the dynamic matching relation between the two parties.
no code implementations • 19 Jun 2023 • Wayne Xin Zhao, Kun Zhou, Beichen Zhang, Zheng Gong, Zhipeng Chen, Yuanhang Zhou, Ji-Rong Wen, Jing Sha, Shijin Wang, Cong Liu, Guoping Hu
Specifically, we construct a Mixture-of-Experts (MoE) architecture for modeling mathematical text, so as to capture the common mathematical knowledge across tasks.
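The gating mechanism behind an MoE layer can be sketched in a few lines. This is a generic dense-gated MoE with linear experts, not the paper's architecture for mathematical text; `moe_layer`, `experts`, and `gate_w` are illustrative names:

```python
import numpy as np

def moe_layer(x, experts, gate_w):
    """Minimal Mixture-of-Experts: a softmax gate scores each expert,
    and the layer output is the gate-weighted sum of expert outputs."""
    logits = x @ gate_w                     # one logit per expert
    gates = np.exp(logits - logits.max())
    gates = gates / gates.sum()             # softmax over experts
    # each expert here is a simple linear map
    outputs = np.stack([x @ w for w in experts])
    return gates @ outputs                  # weighted combination
```

In practice the gate is trained jointly with the experts, so different experts specialize in different kinds of input (e.g., different mathematical tasks).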
1 code implementation • 5 Jun 2023 • Xiaolei Wang, Kun Zhou, Xinyu Tang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen
To develop our approach, we characterize user preference and organize the conversation flow by the entities involved in the dialogue, and design a multi-stage recommendation dialogue simulator based on a conversation flow language model.
1 code implementation • 5 Jun 2023 • Lei Wang, Jingsen Zhang, Hao Yang, ZhiYuan Chen, Jiakai Tang, Zeyu Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, Jun Xu, Zhicheng Dou, Jun Wang, Ji-Rong Wen
We argue that these models present significant opportunities for reliable user simulation, and have the potential to revolutionize traditional study paradigms in user behavior analysis.
1 code implementation • 4 Jun 2023 • Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, Ji-Rong Wen
Based on this finding, we propose DELI, a new approach that deliberates over the reasoning steps with tool interfaces.
1 code implementation • 26 May 2023 • Yifan Du, Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen
In this paper, we propose a novel language model guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA).
1 code implementation • 26 May 2023 • Tianyi Tang, Yushuo Chen, Yifan Du, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
People often imagine relevant scenes to aid in the writing process.
1 code implementation • 24 May 2023 • Tianyi Tang, Hongyuan Lu, Yuchen Eleanor Jiang, Haoyang Huang, Dongdong Zhang, Wayne Xin Zhao, Furu Wei
Most research about natural language generation (NLG) relies on evaluation benchmarks with limited references for a sample, which may result in poor correlations with human judgements.
1 code implementation • 23 May 2023 • Zhipeng Chen, Kun Zhou, Beichen Zhang, Zheng Gong, Wayne Xin Zhao, Ji-Rong Wen
To improve the reasoning abilities, we propose ChatCoT, a tool-augmented chain-of-thought reasoning framework for chat-based LLMs.
1 code implementation • 22 May 2023 • Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen
The recent success of large language models (LLMs) has shown great potential to develop more powerful conversational recommender systems (CRSs), which rely on natural language conversations to satisfy user needs.
2 code implementations • 19 May 2023 • Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
To understand what types of content and to what extent LLMs are apt to hallucinate, we introduce the Hallucination Evaluation for Large Language Models (HaluEval) benchmark, a large collection of generated and human-annotated hallucinated samples for evaluating the performance of LLMs in recognizing hallucination.
1 code implementation • 18 May 2023 • Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jingyuan Wang, Jian-Yun Nie, Ji-Rong Wen
In order to further improve the capacity of LLMs for knowledge-intensive tasks, we consider augmenting LLMs with large-scale web content using a search engine.
no code implementations • 18 May 2023 • Ruiyang Ren, Wayne Xin Zhao, Jing Liu, Hua Wu, Ji-Rong Wen, Haifeng Wang
Recently, model-based retrieval has emerged as a new paradigm in text retrieval that discards the index in the traditional retrieval model and instead memorizes the candidate corpora using model parameters.
2 code implementations • 17 May 2023 • YiFan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, Ji-Rong Wen
Despite the promising progress on LVLMs, we find that LVLMs suffer from the hallucination problem, i.e., they tend to generate objects that are inconsistent with the target images in the descriptions.
1 code implementation • 16 May 2023 • Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, Ji-Rong Wen
Specifically, we propose an invoking-linearization-generation procedure to support LLMs in reasoning on the structured data with the help of the external interfaces.
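The linearization step of such a procedure serializes structured records into flat text an LLM can consume. A simplified sketch for knowledge-graph triples (the function name and separator are illustrative, not taken from the paper):

```python
def linearize_triples(triples):
    """Turn (head, relation, tail) triples into a single flat string
    that can be placed into an LLM prompt -- the 'linearization' step."""
    return " ; ".join(f"{h} {r} {t}" for h, r, t in triples)
```

The generation step would then condition the LLM on this string plus the question, while the invoking step selects which interface (e.g., triple lookup, table row fetch) produced the structured data.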
1 code implementation • 15 May 2023 • Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, Wayne Xin Zhao
Along this line of research, this work aims to investigate the capacity of LLMs that act as the ranking model for recommender systems.
no code implementations • 11 May 2023 • Haoyang Huang, Tianyi Tang, Dongdong Zhang, Wayne Xin Zhao, Ting Song, Yan Xia, Furu Wei
Large language models (LLMs) demonstrate impressive multilingual capability, but their performance varies substantially across different languages.
no code implementations • 11 May 2023 • Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen
Inspired by the recent progress on large language models (LLMs), we take a different approach to developing the recommendation models, considering recommendation as instruction following by LLMs.
no code implementations • 6 May 2023 • Kun Zhou, YiFan Li, Wayne Xin Zhao, Ji-Rong Wen
To solve it, we propose Diffusion-NAT, which introduces discrete diffusion models (DDM) into NAR text-to-text generation and integrates BART to improve the performance.
1 code implementation • 4 May 2023 • Chenzhan Shang, Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Jing Zhang
In our approach, we first employ the hypergraph structure to model users' historical dialogue sessions and form a session-based hypergraph, which captures coarse-grained, session-level relations.
2 code implementations • 27 Apr 2023 • Jiawei Jiang, Chengkai Han, Wenjun Jiang, Wayne Xin Zhao, Jingyuan Wang
As deep learning technology advances and more urban spatial-temporal data accumulates, an increasing number of deep learning models are being proposed to solve urban spatial-temporal prediction problems.
no code implementations • 25 Apr 2023 • Junyi Li, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
In this way, conditional text generation can be cast as a glyph image generation task, and it is then natural to apply continuous diffusion models to discrete texts.
1 code implementation • 21 Apr 2023 • Zhen Tian, Ting Bai, Wayne Xin Zhao, Ji-Rong Wen, Zhao Cao
EulerNet converts the exponential powers of feature interactions into simple linear combinations of the modulus and phase of the complex features, making it possible to adaptively learn the high-order feature interactions in an efficient way.
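The transformation rests on Euler's formula: for complex features in polar form, a multiplicative interaction prod_j z_j^{w_j} becomes a linear combination of log-moduli and phases. A minimal numerical sketch of this identity (names are illustrative; this is the underlying algebra, not EulerNet itself):

```python
import numpy as np

def interaction_via_euler(features, orders):
    """Compute prod_j z_j**w_j for complex features z_j in polar form:
    the exponential powers become *linear* combinations of the
    log-modulus and phase of each feature (Euler's formula)."""
    r = np.abs(features)
    theta = np.angle(features)
    log_mod = orders @ np.log(r)   # sum_j w_j * log r_j
    phase = orders @ theta         # sum_j w_j * theta_j
    return np.exp(log_mod) * np.exp(1j * phase)
```

Because the interaction orders `w_j` now enter linearly, they can be learned as ordinary real-valued parameters, which is what makes adaptive high-order interactions tractable.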
1 code implementation • 31 Mar 2023 • Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, YiFan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen
To mark this difference in parameter scale, the research community has coined the term large language models (LLMs) for the PLMs of significant size.
no code implementations • 27 Mar 2023 • Peiyu Liu, Ze-Feng Gao, Yushuo Chen, Wayne Xin Zhao, Ji-Rong Wen
Based on such a decomposition, our architecture shares the central tensor across all layers for reducing the model size and meanwhile keeps layer-specific auxiliary tensors (also using adapters) for enhancing the adaptation flexibility.
1 code implementation • 12 Mar 2023 • YiFan Li, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen
In this survey, we review the recent progress in diffusion models for NAR text generation.
no code implementations • 28 Feb 2023 • Zican Dong, Tianyi Tang, Lunyi Li, Wayne Xin Zhao
In this paper, we provide an overview of the recent advances in long text modeling based on Transformer models.
no code implementations • 6 Feb 2023 • Shanlei Mu, Penghui Wei, Wayne Xin Zhao, Shaoguo Liu, Liang Wang, Bo Zheng
In this paper, we propose a Hybrid Contrastive Constrained approach (HC^2) for multi-scenario ad ranking.
1 code implementation • 19 Jan 2023 • Jiawei Jiang, Chengkai Han, Wayne Xin Zhao, Jingyuan Wang
However, GNN-based models have three major limitations for traffic prediction: i) Most methods model spatial dependencies in a static manner, which limits the ability to learn dynamic urban traffic patterns; ii) Most methods only consider short-range spatial information and are unable to capture long-range spatial dependencies; iii) These methods ignore the fact that the propagation of traffic conditions between locations has a time delay in traffic systems.
Ranked #1 on Traffic Prediction on PeMSD8
no code implementations • 16 Jan 2023 • Wenjun Jiang, Wayne Xin Zhao, Jingyuan Wang, Jiawei Jiang
Simulating human mobility and generating large-scale trajectories are of great use in many real-world applications, such as urban planning, epidemic spreading analysis, and geographic privacy protection.
1 code implementation • 14 Jan 2023 • Hongpeng Lin, Ludan Ruan, Wenke Xia, Peiyu Liu, Jingyuan Wen, Yixin Xu, Di Hu, Ruihua Song, Wayne Xin Zhao, Qin Jin, Zhiwu Lu
Experimental results indicate that the models incorporating large language models (LLM) can generate more diverse responses, while the model utilizing knowledge graphs to introduce external knowledge performs the best overall.
1 code implementation • 26 Dec 2022 • Tianyi Tang, Junyi Li, Zhipeng Chen, Yiwen Hu, Zhuohao Yu, Wenxun Dai, Zican Dong, Xiaoxue Cheng, Yuhao Wang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs).
Ranked #1 on Style Transfer on GYAFC
1 code implementation • 15 Dec 2022 • Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen
Pre-trained Transformers (e.g., BERT) have been commonly used in existing dense retrieval methods for parameter initialization, and recent studies are exploring more effective pre-training tasks for further improving the quality of dense vectors.
1 code implementation • 15 Dec 2022 • Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Qinyu Zhang, Ji-Rong Wen
Although pre-trained language models (PLMs) have shown impressive performance by text-only self-supervised training, they are found to lack visual semantics or commonsense.
1 code implementation • 2 Dec 2022 • Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen
Multi-hop Question Answering over Knowledge Graphs (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question on a large-scale Knowledge Graph (KG).
1 code implementation • 28 Nov 2022 • Lanling Xu, Zhen Tian, Gaowei Zhang, Lei Wang, Junjie Zhang, Bowen Zheng, YiFan Li, Yupeng Hou, Xingyu Pan, Yushuo Chen, Wayne Xin Zhao, Xu Chen, Ji-Rong Wen
In order to show the recent update in RecBole, we write this technical report to introduce our latest improvements on RecBole.
2 code implementations • 27 Nov 2022 • Wayne Xin Zhao, Jing Liu, Ruiyang Ren, Ji-Rong Wen
With powerful PLMs, we can effectively learn the representations of queries and texts in the latent representation space, and further construct the semantic matching function between the dense vectors for relevance modeling.
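The semantic matching function in dense retrieval is typically just an inner product between the query and text embeddings. A minimal sketch (embeddings are assumed to come from a PLM encoder; here they are plain arrays, and the function names are illustrative):

```python
import numpy as np

def relevance_scores(query_vec, doc_vecs):
    """Dense-retrieval relevance: inner product between a query
    embedding and each document embedding."""
    return doc_vecs @ query_vec

def top_k(query_vec, doc_vecs, k=2):
    """Return the indices of the k highest-scoring documents."""
    scores = relevance_scores(query_vec, doc_vecs)
    return np.argsort(-scores)[:k]
```

At serving time the document embeddings are precomputed and indexed (e.g., with an approximate nearest-neighbor index), so only the query is encoded online.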
1 code implementation • 21 Nov 2022 • Zhen Tian, Ting Bai, Zibin Zhang, Zhiyuan Xu, Kangyi Lin, Ji-Rong Wen, Wayne Xin Zhao
Some recent knowledge distillation based methods transfer knowledge from complex teacher models to shallow student models for accelerating the online model inference.
1 code implementation • 24 Oct 2022 • Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
However, NAR models usually generate texts of lower quality due to the absence of token dependency in the output text.
1 code implementation • 22 Oct 2022 • Yupeng Hou, Zhankui He, Julian McAuley, Wayne Xin Zhao
Based on this representation scheme, we further propose an enhanced contrastive pre-training approach, using semi-synthetic and mixed-domain code representations as hard negatives.
1 code implementation • 21 Oct 2022 • Kun Zhou, Yeyun Gong, Xiao Liu, Wayne Xin Zhao, Yelong Shen, Anlei Dong, Jingwen Lu, Rangan Majumder, Ji-Rong Wen, Nan Duan, Weizhu Chen
Thus, we propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.
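One plausible instantiation of such a distribution peaks at negatives whose retrieval score is close to the positive's, i.e. p_i ∝ exp(-a·(s_i − s_pos − b)²). This is a hedged sketch of the idea, not necessarily the paper's exact formula; `a` and `b` are illustrative hyperparameters:

```python
import numpy as np

def ambiguous_negative_probs(neg_scores, pos_score, a=1.0, b=0.0):
    """Sampling distribution concentrated on 'ambiguous' negatives:
    candidates scored near the positive get the highest probability,
    while clearly easy or clearly false negatives are down-weighted."""
    logits = -a * (np.asarray(neg_scores, dtype=float) - pos_score - b) ** 2
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

Negatives are then drawn from this distribution (e.g., with `np.random.choice`) instead of taking the top-scored candidates, which tends to mix in false negatives.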
1 code implementation • 21 Oct 2022 • Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen
To develop effective and efficient graph similarity learning (GSL) models, a series of data-driven neural algorithms have been proposed in recent years.
no code implementations • 29 Aug 2022 • Zihan Lin, Xuanhua Yang, Xiaoyu Peng, Wayne Xin Zhao, Shaoguo Liu, Liang Wang, Bo Zheng
For this purpose, we build a relatedness prediction network, so that it can predict the contrast strength for inter-task representations of an instance.
1 code implementation • 18 Aug 2022 • Chen Yang, Yupeng Hou, Yang song, Tao Zhang, Ji-Rong Wen, Wayne Xin Zhao
To model the two-way selection preference from the dual-perspective of job seekers and employers, we incorporate two different nodes for each candidate (or job) and characterize both successful matching and failed matching via a unified dual-perspective interaction graph.
2 code implementations • 24 Jun 2022 • Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation.
1 code implementation • 19 Jun 2022 • Xiaolei Wang, Kun Zhou, Ji-Rong Wen, Wayne Xin Zhao
Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified approach.
Ranked #1 on Text Generation on ReDial
2 code implementations • 15 Jun 2022 • Wayne Xin Zhao, Yupeng Hou, Xingyu Pan, Chen Yang, Zeyu Zhang, Zihan Lin, Jingsen Zhang, Shuqing Bian, Jiakai Tang, Wenqi Sun, Yushuo Chen, Lanling Xu, Gaowei Zhang, Zhen Tian, Changxin Tian, Shanlei Mu, Xinyan Fan, Xu Chen, Ji-Rong Wen
In order to support the study of recent advances in recommender systems, this paper presents an extended recommendation library consisting of eight packages for up-to-date topics and architectures.
1 code implementation • 13 Jun 2022 • Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen
In order to develop effective sequential recommenders, a series of sequence representation learning (SRL) methods are proposed to model historical user behaviors.
1 code implementation • 13 Jun 2022 • Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou, Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen
Considering the complex nature of mathematical texts, we design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
no code implementations • 10 Jun 2022 • Zihan Lin, Hui Wang, Jingshu Mao, Wayne Xin Zhao, Cheng Wang, Peng Jiang, Ji-Rong Wen
Relevant recommendation is a special recommendation scenario which provides relevant items when users express interests on one target item (e.g., click, like, and purchase).
no code implementations • 6 Jun 2022 • Shanlei Mu, Yupeng Hou, Wayne Xin Zhao, Yaliang Li, Bolin Ding
Instead of explicitly learning representations for item IDs, IDA-SR directly learns item representations from rich text information.
no code implementations • 1 Jun 2022 • Lanling Xu, Jianxun Lian, Wayne Xin Zhao, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Ji-Rong Wen
The learn-to-compare paradigm of contrastive representation learning (CRL), which compares positive samples with negative ones for representation learning, has achieved great success in a wide range of domains, including natural language processing, computer vision, information retrieval and graph learning.
1 code implementation • 22 May 2022 • Xinyan Fan, Jianxun Lian, Wayne Xin Zhao, Zheng Liu, Chaozhuo Li, Xing Xie
We first extract distribution patterns from the item candidates.
1 code implementation • 4 May 2022 • Jinhao Jiang, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen
Commonsense reasoning in natural language is a desired ability of artificial intelligent systems.
1 code implementation • NAACL 2022 • Junyi Li, Tianyi Tang, Zheng Gong, Lixin Yang, Zhuohao Yu, Zhipeng Chen, Jingyuan Wang, Wayne Xin Zhao, Ji-Rong Wen
In this paper, we present a large-scale empirical study on general language ability evaluation of PLMs (ElitePLM).
1 code implementation • NAACL 2022 • Junyi Li, Tianyi Tang, Jian-Yun Nie, Ji-Rong Wen, Wayne Xin Zhao
First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.
1 code implementation • ACL 2022 • Kun Zhou, Beichen Zhang, Wayne Xin Zhao, Ji-Rong Wen
In DCLR, we design an instance weighting method to punish false negatives and generate noise-based negatives to guarantee the uniformity of the representation space.
no code implementations • 27 Apr 2022 • Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qifei Wu, Yuchen Ding, Hua Wu, Haifeng Wang, Ji-Rong Wen
Recent years have witnessed significant advances in dense retrieval (DR) based on powerful pre-trained language models (PLMs).
1 code implementation • 23 Apr 2022 • Yupeng Hou, Binbin Hu, Zhiqiang Zhang, Wayne Xin Zhao
Session-based Recommendation (SBR) refers to the task of predicting the next item based on short-term user behaviors within an anonymous session.
no code implementations • 27 Mar 2022 • Yupeng Hou, Xingyu Pan, Wayne Xin Zhao, Shuqing Bian, Yang song, Tao Zhang, Ji-Rong Wen
As the core technique of online recruitment platforms, person-job fit can improve hiring efficiency by accurately matching job positions with qualified candidates.
no code implementations • 22 Mar 2022 • Sha Yuan, Shuai Zhao, Jiahong Leng, Zhao Xue, Hanyu Zhao, Peiyu Liu, Zheng Gong, Wayne Xin Zhao, Junyi Li, Jie Tang
The results show that WuDaoMM can be applied as an efficient dataset for VLPMs, especially for models on the text-to-image generation task.
1 code implementation • 3 Mar 2022 • Yupeng Hou, Binbin Hu, Wayne Xin Zhao, Zhiqiang Zhang, Jun Zhou, Ji-Rong Wen
In this way, we can learn adaptive representations for a given graph when paired with different graphs, and both node- and graph-level characteristics are naturally considered in a single pre-training task.
2 code implementations • COLING 2022 • Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, Zhong-Yi Lu, Ji-Rong Wen
Recently, Mixture-of-Experts (short as MoE) architecture has achieved remarkable success in increasing the model capacity of large-scale language models.
1 code implementation • 28 Feb 2022 • Kun Zhou, Hui Yu, Wayne Xin Zhao, Ji-Rong Wen
Recently, deep neural networks such as RNN, CNN and Transformer have been applied in the task of sequential recommendation, which aims to capture the dynamic preference characteristics from logged user behavior data for accurate recommendation.
no code implementations • 18 Feb 2022 • Yifan Du, Zikang Liu, Junyi Li, Wayne Xin Zhao
In this paper, we review the recent progress in Vision-Language Pre-Trained Models (VL-PTMs).
1 code implementation • 13 Feb 2022 • Zihan Lin, Changxin Tian, Yupeng Hou, Wayne Xin Zhao
For the structural neighbors on the interaction graph, we develop a novel structure-contrastive objective that regards users (or items) and their structural neighbors as positive contrastive pairs.
1 code implementation • COLING 2022 • Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
Secondly, we use continuous inverse prompting to improve the process of natural language generation by modeling an inverse generation process from output to input, making the generated text more relevant to the inputs.
no code implementations • 14 Jan 2022 • Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
We begin with introducing three key aspects of applying PLMs to text generation: 1) how to encode the input into representations preserving input semantics which can be fused into PLMs; 2) how to design an effective PLM to serve as the generation model; and 3) how to effectively optimize PLMs given the reference text and to ensure that the generated texts satisfy special text properties.
1 code implementation • 4 Jan 2022 • Yuanhang Zhou, Kun Zhou, Wayne Xin Zhao, Cheng Wang, Peng Jiang, He Hu
To implement this framework, we design both coarse-grained and fine-grained procedures for modeling user preference, where the former focuses on more general, coarse-grained semantic fusion and the latter focuses on more specific, fine-grained semantic fusion.
Ranked #1 on Recommendation Systems on ReDial
1 code implementation • International Conference on Advances in Geographic Information Systems 2021 • Jingyuan Wang, Jiawei Jiang, Wenjun Jiang, Chao Li, Wayne Xin Zhao
This paper presents LibCity, a unified, comprehensive, and extensible library for traffic prediction, which provides researchers with a credible experimental tool and a convenient development framework.
Tasks: Multivariate Time Series Forecasting, Spatio-Temporal Forecasting, +2
1 code implementation • EMNLP 2021 • Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Ji-Rong Wen
In this paper, we propose a novel joint training approach for dense passage retrieval and passage re-ranking.
1 code implementation • EMNLP 2021 • Kun Zhou, Wayne Xin Zhao, Sirui Wang, Fuzheng Zhang, Wei Wu, Ji-Rong Wen
To solve this issue, various data augmentation techniques are proposed to improve the robustness of PLMs.
1 code implementation • 15 Aug 2021 • Yunshi Lan, Gaole He, Jinhao Jiang, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen
Knowledge base question answering (KBQA) aims to answer a question over a knowledge base (KB).
1 code implementation • Findings (ACL) 2021 • Ruiyang Ren, Shangwen Lv, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Ji-Rong Wen
Recently, dense passage retrieval has become a mainstream approach to finding relevant information in various natural language processing tasks.
no code implementations • 14 Jun 2021 • Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan YAO, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
no code implementations • 12 Jun 2021 • Hui Wang, Kun Zhou, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen
Due to the flexibility in modelling data heterogeneity, heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in top-N recommender systems, called HIN-based recommendation.
1 code implementation • ACL 2021 • Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen
This paper presents a novel pre-trained language model (PLM) compression approach based on the matrix product operator (short as MPO) from quantum many-body physics.
1 code implementation • Findings (ACL) 2021 • Junyi Li, Tianyi Tang, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, Ji-Rong Wen
This paper studies how to automatically generate a natural language text that describes the facts in knowledge graph (KG).
no code implementations • 25 May 2021 • Yunshi Lan, Gaole He, Jinhao Jiang, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen
In this paper, we elaborately summarize the typical challenges and solutions for complex KBQA.
no code implementations • 21 May 2021 • Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen
In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation.
no code implementations • 9 May 2021 • Junyi Li, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, Ji-Rong Wen
For global coherence, we design a hierarchical self-attentive architecture with both subgraph- and node-level attention to enhance the correlations between subgraphs.
2 code implementations • 11 Mar 2021 • Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, Zongzheng Xi, Yueqian Yang, Anwen Hu, Jinming Zhao, Ruichen Li, Yida Zhao, Liang Zhang, Yuqing Song, Xin Hong, Wanqing Cui, Danyang Hou, Yingyan Li, Junyi Li, Peiyu Liu, Zheng Gong, Chuhao Jin, Yuchong Sun, ShiZhe Chen, Zhiwu Lu, Zhicheng Dou, Qin Jin, Yanyan Lan, Wayne Xin Zhao, Ruihua Song, Ji-Rong Wen
We further construct a large Chinese multi-source image-text dataset called RUC-CAS-WenLan for pre-training our BriVL model.
Ranked #1 on Image Retrieval on RUC-CAS-WenLan
1 code implementation • 11 Jan 2021 • Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen
In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network.
Ranked #2 on Semantic Parsing on WebQuestionsSP
1 code implementation • ACL 2021 • Junyi Li, Tianyi Tang, Gaole He, Jinhao Jiang, Xiaoxuan Hu, Puzhao Xie, Zhipeng Chen, Zhuohao Yu, Wayne Xin Zhao, Ji-Rong Wen
In this paper, we release an open-source library, called TextBox, to provide a unified, modularized, and extensible text generation framework.
1 code implementation • ACL 2021 • Kun Zhou, Xiaolei Wang, Yuanhang Zhou, Chenzhan Shang, Yuan Cheng, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen
In recent years, conversational recommender system (CRS) has received much attention in the research community.
1 code implementation • 3 Nov 2020 • Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, Yingqian Min, Zhichao Feng, Xinyan Fan, Xu Chen, Pengfei Wang, Wendi Ji, Yaliang Li, Xiaoling Wang, Ji-Rong Wen
In this library, we implement 73 recommendation models on 28 benchmark datasets, covering the categories of general recommendation, sequential recommendation, context-aware recommendation and knowledge-based recommendation.
1 code implementation • NAACL 2021 • Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, daxiang dong, Hua Wu, Haifeng Wang
In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers.
Ranked #4 on Passage Retrieval on Natural Questions
no code implementations • 9 Oct 2020 • Wayne Xin Zhao, Junhua Chen, Pengfei Wang, Qi Gu, Ji-Rong Wen
Top-N item recommendation has been a widely studied task from implicit feedback.
2 code implementations • COLING 2020 • Kun Zhou, Yuanhang Zhou, Wayne Xin Zhao, Xiaoke Wang, Ji-Rong Wen
To develop an effective CRS, the support of high-quality datasets is essential.
1 code implementation • 4 Oct 2020 • Junyi Li, Siqing Li, Wayne Xin Zhao, Gaole He, Zhicheng Wei, Nicholas Jing Yuan, Ji-Rong Wen
First, based on graph capsules, we adaptively learn aspect capsules for inferring the aspect sequence.
no code implementations • 25 Sep 2020 • Shuqing Bian, Xu Chen, Wayne Xin Zhao, Kun Zhou, Yupeng Hou, Yang song, Tao Zhang, Ji-Rong Wen
Compared with pure text-based matching models, the proposed approach is able to learn better data representations from limited or even sparse interaction data, which is more resistant to noise in training data.
no code implementations • 19 Aug 2020 • Kun Zhou, Wayne Xin Zhao, Hui Wang, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen
Most of the existing CRS methods focus on learning effective preference representations for users from conversation data alone.
2 code implementations • 18 Aug 2020 • Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen
To tackle this problem, we propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation, based on the self-attentive neural architecture.
2 code implementations • 8 Jul 2020 • Kun Zhou, Wayne Xin Zhao, Shuqing Bian, Yuanhang Zhou, Ji-Rong Wen, Jingsong Yu
Conversational recommender systems (CRS) aim to recommend high-quality items to users through interactive conversations.
Ranked #3 on Text Generation on ReDial
2 code implementations • 21 May 2020 • Ruiyang Ren, Zhao-Yang Liu, Yaliang Li, Wayne Xin Zhao, Hui Wang, Bolin Ding, Ji-Rong Wen
Recently, deep learning has made significant progress in the task of sequential recommendation.
4 code implementations • 28 Mar 2020 • Gaole He, Junyi Li, Wayne Xin Zhao, Peiju Liu, Ji-Rong Wen
Our generator is isolated from user interaction data, and serves to improve the performance of the discriminator.
no code implementations • 18 Feb 2020 • Kun Zhou, Wayne Xin Zhao, Yutao Zhu, Ji-Rong Wen, Jingsong Yu
Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters.
no code implementations • IJCNLP 2019 • Siqing Li, Wayne Xin Zhao, Eddy Jing Yin, Ji-Rong Wen
An important kind of data signals, peer review text, has not been utilized for the CCP task.
no code implementations • IJCNLP 2019 • Shuqing Bian, Wayne Xin Zhao, Yang song, Tao Zhang, Ji-Rong Wen
Furthermore, we extend the match network and implement domain adaptation in three levels, sentence-level representation, sentence-level match, and global match.
no code implementations • 19 Jul 2019 • Jingyuan Wang, Ning Wu, Wayne Xin Zhao, Fanzhang Peng, Xin Lin
To address these issues, we propose using neural networks to automatically learn the cost functions of a classic heuristic algorithm, namely the A* algorithm, for the PRR task.
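A* with pluggable cost and heuristic functions makes the idea concrete: in the paper's spirit the two callables would be learned neural networks, while in this minimal sketch they are plain functions (names are illustrative):

```python
import heapq

def a_star(neighbors, start, goal, cost_fn, heuristic_fn):
    """A* search with pluggable edge cost and heuristic callables.
    Returns (path, total_cost), or (None, inf) if goal is unreachable."""
    frontier = [(heuristic_fn(start, goal), 0.0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                       # already expanded more cheaply
        best_g[node] = g
        for nxt in neighbors(node):
            ng = g + cost_fn(node, nxt)    # cost_fn could be a learned model
            heapq.heappush(frontier,
                           (ng + heuristic_fn(nxt, goal), ng, nxt, path + [nxt]))
    return None, float("inf")
```

With an admissible heuristic the returned path is cost-optimal; replacing `cost_fn` with a learned model changes which routes the search prefers without touching the search procedure itself.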
1 code implementation • ACL 2019 • Junyi Li, Wayne Xin Zhao, Ji-Rong Wen, Yang song
In this paper, we propose a novel review generation model by characterizing an elaborately designed aspect-aware coarse-to-fine generation process.
no code implementations • 12 Feb 2019 • Ting Bai, Pan Du, Wayne Xin Zhao, Ji-Rong Wen, Jian-Yun Nie
Recommending the right products is the central problem in recommender systems, but the right products should also be recommended at the right time to meet the demands of users, so as to maximize their values.
1 code implementation • 30 Jul 2018 • Wayne Xin Zhao, Gaole He, Hongjian Dou, Jin Huang, Siqi Ouyang, Ji-Rong Wen
Based on our linked dataset, we first perform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether an RS item can be linked to a KB entity.
2 code implementations • ACL 2018 • Xiangyang Zhou, Lu Li, daxiang dong, Yi Liu, Ying Chen, Wayne Xin Zhao, dianhai yu, Hua Wu
Humans generate responses relying on semantic and functional dependencies, including coreference relations, among dialogue elements and their context.
Ranked #6 on Conversational Response Selection on RRS
1 code implementation • ACL 2018 • Jialong Han, Yan Song, Wayne Xin Zhao, Shuming Shi, Haisong Zhang
Hypertext documents, such as web pages and academic papers, are of great importance in delivering information in our daily life.
1 code implementation • 29 Nov 2017 • Chuan Shi, Binbin Hu, Wayne Xin Zhao, Philip S. Yu
In this paper, we propose a novel heterogeneous network embedding based approach for HIN based recommendation, called HERec.
Social and Information Networks
1 code implementation • 6 Feb 2015 • Wayne Xin Zhao, Xu-Dong Zhang, Daniel Lemire, Dongdong Shan, Jian-Yun Nie, Hongfei Yan, Ji-Rong Wen
Compression algorithms are important for data-oriented tasks, especially in the era of Big Data.