Search Results for author: Daxin Jiang

Found 129 papers, 58 papers with code

Social Norms-Grounded Machine Ethics in Complex Narrative Situation

no code implementations COLING 2022 Tao Shen, Xiubo Geng, Daxin Jiang

Besides a norm-grounding knowledge model, we present a novel norm-supported ethical judgment model in line with neural module networks to alleviate dilemma situations and improve norm-level explainability.

Cultural Vocal Bursts Intensity Prediction Ethics

Hypertext Entity Extraction in Webpage

no code implementations 4 Mar 2024 Yifei Yang, Tianqiao Liu, Bo Shao, Hai Zhao, Linjun Shou, Ming Gong, Daxin Jiang

Webpage entity extraction is a fundamental natural language processing task in both research and applications.

Instructed Language Models with Retrievers Are Powerful Entity Linkers

1 code implementation 6 Nov 2023 Zilin Xiao, Ming Gong, Jie Wu, Xingyao Zhang, Linjun Shou, Jian Pei, Daxin Jiang

Generative approaches powered by large language models (LLMs) have demonstrated emergent abilities in tasks that require complex reasoning.

Entity Linking In-Context Learning

Coherent Entity Disambiguation via Modeling Topic and Categorical Dependency

no code implementations 6 Nov 2023 Zilin Xiao, Linjun Shou, Xingyao Zhang, Jie Wu, Ming Gong, Jian Pei, Daxin Jiang

We propose CoherentED, an ED system equipped with novel designs aimed at enhancing the coherence of entity predictions.

Entity Disambiguation

RUEL: Retrieval-Augmented User Representation with Edge Browser Logs for Sequential Recommendation

no code implementations 19 Sep 2023 Ning Wu, Ming Gong, Linjun Shou, Jian Pei, Daxin Jiang

RUEL is the first method that connects user browsing data with typical recommendation datasets and can be generalized to various recommendation scenarios and datasets.

Contrastive Learning Retrieval +3

Investigating the Learning Behaviour of In-context Learning: A Comparison with Supervised Learning

1 code implementation 28 Jul 2023 Xindi Wang, YuFei Wang, Can Xu, Xiubo Geng, BoWen Zhang, Chongyang Tao, Frank Rudzicz, Robert E. Mercer, Daxin Jiang

Large language models (LLMs) have shown a remarkable capacity for in-context learning (ICL), learning a new task from just a few training examples without being explicitly pre-trained for it.

In-Context Learning

WizardCoder: Empowering Code Large Language Models with Evol-Instruct

3 code implementations 14 Jun 2023 Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang

Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+.

Code Generation HumanEval

Allies: Prompting Large Language Model with Beam Search

1 code implementation 24 May 2023 Hao Sun, Xiao Liu, Yeyun Gong, Yan Zhang, Daxin Jiang, Linjun Yang, Nan Duan

With the advance of large language models (LLMs), the research field of LLM applications has become more and more popular, and the idea of constructing pipelines that accomplish complex tasks by stacking LLM API calls has become a reality.

Language Modelling Large Language Model +3
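
To make the "stacking LLM API calls" idea above concrete, here is a minimal beam-search sketch in the spirit of Allies, assuming hypothetical `generate_candidates` and `score` helpers in place of real LLM API calls (toy stubs below so the example runs):

```python
# Hedged sketch of beam search over stacked LLM API calls; the helper
# functions are hypothetical stand-ins, not the Allies implementation.
def generate_candidates(question: str, answer: str, k: int) -> list[str]:
    # Stand-in for an LLM API call that proposes k refined answers.
    return [f"{answer} draft{i}".strip() for i in range(k)]

def score(question: str, answer: str) -> float:
    # Stand-in for an LLM call that rates answer quality (toy heuristic).
    return float(len(answer))

def beam_search_answer(question: str, beam_width=3, expand_k=4, rounds=2):
    beam = [""]  # start from an empty draft answer
    for _ in range(rounds):
        pool = []
        for ans in beam:
            pool.extend(generate_candidates(question, ans, expand_k))
        # keep only the highest-scoring candidates for the next round
        pool.sort(key=lambda a: score(question, a), reverse=True)
        beam = pool[:beam_width]
    return beam[0]

print(beam_search_answer("Who proposed beam search for LLM prompting?"))
```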

Synergistic Interplay between Search and Large Language Models for Information Retrieval

2 code implementations 12 May 2023 Jiazhan Feng, Chongyang Tao, Xiubo Geng, Tao Shen, Can Xu, Guodong Long, Dongyan Zhao, Daxin Jiang

Information retrieval (IR) plays a crucial role in locating relevant resources from vast amounts of data, and its applications have evolved from traditional knowledge bases to modern retrieval models (RMs).

Information Retrieval Retrieval

Alleviating Over-smoothing for Unsupervised Sentence Representation

1 code implementation 9 May 2023 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Bowen Cao, Jianhui Chang, Daxin Jiang, Jia Li

Learning better unsupervised sentence representations is currently a shared pursuit of the natural language processing community.

Contrastive Learning Semantic Textual Similarity +1

Code Execution with Pre-trained Language Models

1 code implementation 8 May 2023 Chenxiao Liu, Shuai Lu, Weizhu Chen, Daxin Jiang, Alexey Svyatkovskiy, Shengyu Fu, Neel Sundaresan, Nan Duan

Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code.

Code Generation Code Search +2

Augmented Large Language Models with Parametric Knowledge Guiding

1 code implementation 8 May 2023 Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang

We demonstrate that our PKG framework can enhance the performance of "black-box" LLMs on a range of domain knowledge-intensive tasks that require factual (+7.9%), tabular (+11.9%), medical (+3.0%), and multimodal (+8.1%) knowledge.
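
A hedged sketch of the parametric-knowledge-guiding idea, assuming a hypothetical local `knowledge_module` and a `blackbox_llm` call; the real PKG module is a trained open-source LM, not the placeholder used here:

```python
# Sketch: a small local "knowledge module" drafts domain background text,
# which is prepended to the prompt of a black-box LLM. Both calls are stubs.
def knowledge_module(question: str) -> str:
    # In PKG this is a trained open-source LM; here, a placeholder.
    return f"(background facts relevant to: {question})"

def blackbox_llm(prompt: str) -> str:
    return f"(answer derived from a prompt of {len(prompt)} chars)"

def pkg_answer(question: str) -> str:
    background = knowledge_module(question)
    prompt = f"Background:\n{background}\n\nQuestion: {question}\nAnswer:"
    return blackbox_llm(prompt)

print(pkg_answer("Which drugs interact with warfarin?"))
```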

Large Language Models are Strong Zero-Shot Retriever

no code implementations 27 Apr 2023 Tao Shen, Guodong Long, Xiubo Geng, Chongyang Tao, Tianyi Zhou, Daxin Jiang

In this work, we propose a simple method that applies a large language model (LLM) to large-scale retrieval in zero-shot scenarios.

Language Modelling Large Language Model +1

WizardLM: Empowering Large Language Models to Follow Complex Instructions

4 code implementations 24 Apr 2023 Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Daxin Jiang

In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLM instead of humans.

Instruction Following
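
The Evol-Instruct recipe behind WizardLM can be sketched as a loop that keeps asking an LLM to rewrite instructions into harder or related variants. The prompts and the `llm` stub below are illustrative assumptions, not the released implementation:

```python
import random

# Sketch of Evol-Instruct: repeatedly rewrite an instruction into a harder
# (in-depth) or different-but-related (in-breadth) one via an LLM call.
DEPTH_PROMPTS = [
    "Add one more constraint to this instruction:",
    "Require multi-step reasoning for this instruction:",
    "Replace a general concept in this instruction with a specific one:",
]
BREADTH_PROMPT = "Write a new instruction in the same domain as:"

def llm(prompt: str) -> str:
    # Placeholder completion; a real system would call an LLM API here.
    return f"(evolved) {prompt.splitlines()[-1]}"

def evolve(seed_instructions, generations=3):
    pool = list(seed_instructions)
    for _ in range(generations):
        new = []
        for inst in pool:
            op = random.choice(DEPTH_PROMPTS + [BREADTH_PROMPT])
            new.append(llm(f"{op}\n{inst}"))
        pool.extend(new)  # growing pool of increasingly complex instructions
    return pool

print(len(evolve(["Sort a list in Python."])))  # 8 instructions after 3 rounds
```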

Typos-aware Bottlenecked Pre-Training for Robust Dense Retrieval

1 code implementation 17 Apr 2023 Shengyao Zhuang, Linjun Shou, Jian Pei, Ming Gong, Houxing Ren, Guido Zuccon, Daxin Jiang

To address this challenge, we propose ToRoDer (TypOs-aware bottlenecked pre-training for RObust DEnse Retrieval), a novel re-training strategy for DRs that increases their robustness to misspelled queries while preserving their effectiveness in downstream retrieval tasks.

Decoder Language Modelling +1
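
Robustness to misspelled queries presupposes a way to generate them during pre-training; below is a toy typo-injection helper under assumed noising operations (the paper's exact scheme may differ):

```python
import random

# Toy query-corruption helper in the spirit of typos-aware pre-training:
# inject a misspelling so the retriever sees noisy queries during training.
def add_typo(query: str) -> str:
    if len(query) < 2:
        return query
    i = random.randrange(len(query) - 1)
    op = random.choice(["swap", "drop", "repeat"])
    if op == "swap":   # transpose two neighbouring characters
        return query[:i] + query[i + 1] + query[i] + query[i + 2:]
    if op == "drop":   # delete one character
        return query[:i] + query[i + 1:]
    return query[:i] + query[i] + query[i:]  # duplicate one character

print(add_typo("dense retrieval"))
```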

Inference with Reference: Lossless Acceleration of Large Language Models

2 code implementations 10 Apr 2023 Nan Yang, Tao Ge, Liang Wang, Binxing Jiao, Daxin Jiang, Linjun Yang, Rangan Majumder, Furu Wei

We propose LLMA, an LLM accelerator to losslessly speed up Large Language Model (LLM) inference with references.

Decoder Language Modelling +1
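
A rough sketch of the copy-and-verify decoding behind LLMA, with a hypothetical `next_token` standing in for one greedy LM step; a real implementation verifies all copied positions in a single parallel forward pass:

```python
# Sketch: when decoding reaches a token that also occurs in a reference
# document, speculatively copy the next few reference tokens and keep only
# the prefix the model itself agrees with (hence "lossless").
def next_token(tokens: list[str]) -> str:
    return "eos"  # placeholder for one LM forward pass + argmax

def llma_decode(prompt: list[str], reference: list[str],
                copy_len: int = 4, max_len: int = 32) -> list[str]:
    out = list(prompt)
    while len(out) < max_len and out[-1] != "eos":
        if out[-1] in reference:
            j = reference.index(out[-1]) + 1
            for tok in reference[j:j + copy_len]:   # speculative copy
                # a real system checks all copied positions in ONE parallel
                # forward pass; here they are verified one by one
                if next_token(out) == tok:
                    out.append(tok)
                else:
                    break
        out.append(next_token(out))  # ordinary decoding step
    return out

print(llma_decode(["the", "cat"], ["the", "cat", "sat", "on", "the", "mat"]))
```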

Large Language Models are Diverse Role-Players for Summarization Evaluation

no code implementations 27 Mar 2023 Ning Wu, Ming Gong, Linjun Shou, Shining Liang, Daxin Jiang

First, we propose to model objective and subjective dimensions of generated text based on a role-player prompting mechanism.

Informativeness Text Summarization

Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval

no code implementations 27 Mar 2023 Houxing Ren, Linjun Shou, Jian Pei, Ning Wu, Ming Gong, Daxin Jiang

In this paper, we propose to mine and generate self-supervised training data based on a large-scale unlabeled corpus.

Retrieval

Empowering Dual-Encoder with Query Generator for Cross-Lingual Dense Retrieval

no code implementations 27 Mar 2023 Houxing Ren, Linjun Shou, Ning Wu, Ming Gong, Daxin Jiang

However, we find that the performance of the cross-encoder re-ranker is heavily influenced by the number of training samples and the quality of negative samples, which is hard to obtain in the cross-lingual setting.

Knowledge Distillation Retrieval

Bridge the Gap between Language models and Tabular Understanding

no code implementations 16 Feb 2023 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, Jia Li

For instance, TPLMs jointly pre-trained with table and text input could be effective for tasks also with table-text joint input like table question answering, but it may fail for tasks with only tables or text as input such as table retrieval.

Contrastive Learning Language Modelling +2

LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Retrieval

1 code implementation 6 Feb 2023 Ziyang Luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing Ma, Qingwen Lin, Daxin Jiang

The conventional dense retrieval paradigm relies on encoding images and texts into dense representations using dual-stream encoders; however, it faces challenges with low retrieval speed in large-scale retrieval scenarios.

Image-text Retrieval Text Retrieval

Modeling Sequential Sentence Relation to Improve Cross-lingual Dense Retrieval

1 code implementation 3 Feb 2023 Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan

Specifically, we propose a multilingual PLM called masked sentence model (MSM), which consists of a sentence encoder to generate the sentence representations, and a document encoder applied to a sequence of sentence vectors from a document.

Relation Representation Learning +3

Iterative Proposal Refinement for Weakly-Supervised Video Grounding

no code implementations CVPR 2023 Meng Cao, Fangyun Wei, Can Xu, Xiubo Geng, Long Chen, Can Zhang, Yuexian Zou, Tao Shen, Daxin Jiang

Weakly-Supervised Video Grounding (WSVG) aims to localize events of interest in untrimmed videos with only video-level annotations.

Sentence Video Grounding

Fine-Grained Distillation for Long Document Retrieval

no code implementations 20 Dec 2022 Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Guodong Long, Can Xu, Daxin Jiang

Long document retrieval aims to fetch query-relevant documents from a large-scale collection, where knowledge distillation has become the de facto approach to improving a retriever by mimicking a heterogeneous yet powerful cross-encoder.

Knowledge Distillation Retrieval

Adam: Dense Retrieval Distillation with Adaptive Dark Examples

no code implementations 20 Dec 2022 Chongyang Tao, Chang Liu, Tao Shen, Can Xu, Xiubo Geng, Binxing Jiao, Daxin Jiang

Different from previous works that only rely on one positive and hard negatives as candidate passages, we create dark examples that all have moderate relevance to the query through mixing-up and masking in discrete space.

Knowledge Distillation Retrieval
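
As a loose illustration of "dark examples", the snippet below builds moderately relevant passages by masking a positive or mixing positive and negative tokens in discrete space; the ratios and policy are assumptions, not the paper's settings:

```python
import random

# Two toy constructors for passages with moderate query relevance.
def mask_passage(tokens, ratio=0.4, mask="[MASK]"):
    # Hide part of a positive passage so it is only partially relevant.
    return [mask if random.random() < ratio else t for t in tokens]

def mixup_passages(pos_tokens, neg_tokens, pos_ratio=0.5):
    # Discrete mix-up: interleave tokens from a positive and a negative.
    n = min(len(pos_tokens), len(neg_tokens))
    return [pos_tokens[i] if random.random() < pos_ratio else neg_tokens[i]
            for i in range(n)]

pos = "the eiffel tower is in paris".split()
neg = "bananas are rich in potassium yes".split()
print(mask_passage(pos))          # partially masked positive
print(mixup_passages(pos, neg))   # token-level discrete mix-up
```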

MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers

1 code implementation 15 Dec 2022 Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen

Pre-trained Transformers (e.g., BERT) have been commonly used in existing dense retrieval methods for parameter initialization, and recent studies are exploring more effective pre-training tasks for further improving the quality of dense vectors.

Decoder Passage Retrieval +1

VATLM: Visual-Audio-Text Pre-Training with Unified Masked Prediction for Speech Representation Learning

no code implementations 21 Nov 2022 Qiushi Zhu, Long Zhou, Ziqiang Zhang, Shujie Liu, Binxing Jiao, Jie Zhang, LiRong Dai, Daxin Jiang, Jinyu Li, Furu Wei

Although speech is a simple and effective way for humans to communicate with the outside world, a more realistic speech interaction contains multimodal information, e.g., vision and text.

Audio-Visual Speech Recognition Language Modelling +4

Soft-Labeled Contrastive Pre-training for Function-level Code Representation

1 code implementation 18 Oct 2022 Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan

In this paper, we present SCodeR, a Soft-labeled contrastive pre-training framework with two positive-sample construction methods to learn function-level Code Representation.

Mixed-modality Representation Learning and Pre-training for Joint Table-and-Text Retrieval in OpenQA

1 code implementation 11 Oct 2022 JunJie Huang, Wanjun Zhong, Qian Liu, Ming Gong, Daxin Jiang, Nan Duan

However, training an effective dense table-text retriever is difficult due to the challenges of table-text discrepancy and data sparsity problem.

Open-Domain Question Answering Representation Learning +1

PROD: Progressive Distillation for Dense Retrieval

1 code implementation 27 Sep 2022 Zhenghao Lin, Yeyun Gong, Xiao Liu, Hang Zhang, Chen Lin, Anlei Dong, Jian Jiao, Jingwen Lu, Daxin Jiang, Rangan Majumder, Nan Duan

It is common that a better teacher model results in a bad student via distillation due to the non-negligible gap between teacher and student.

Knowledge Distillation Natural Questions +1

LexMAE: Lexicon-Bottlenecked Pretraining for Large-Scale Retrieval

1 code implementation 31 Aug 2022 Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang

In large-scale retrieval, the lexicon-weighting paradigm, learning weighted sparse representations in vocabulary space, has shown promising results with high quality and low latency.

Decoder Language Modelling +2

LED: Lexicon-Enlightened Dense Retriever for Large-Scale Retrieval

2 code implementations 29 Aug 2022 Kai Zhang, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Binxing Jiao, Daxin Jiang

The alignment is achieved by weakened knowledge distillations to enlighten the retriever via two aspects -- 1) a lexicon-augmented contrastive objective to challenge the dense encoder and 2) a pair-wise rank-consistent regularization to make dense model's behavior incline to the other.

Representation Learning Retrieval
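
The pair-wise rank-consistent regularization can be sketched as a hinge loss on pairs that the lexicon-aware teacher orders differently from the dense retriever; the exact loss in LED may differ from this assumption:

```python
import torch

# Wherever the lexicon-aware teacher prefers passage a over passage b,
# hinge-penalize the dense retriever for ranking them the other way.
def rank_consistency_loss(dense_scores, lexicon_scores, margin=0.0):
    d = dense_scores.unsqueeze(1) - dense_scores.unsqueeze(0)     # dense diffs
    t = lexicon_scores.unsqueeze(1) - lexicon_scores.unsqueeze(0)
    prefer = torch.sign(t)            # teacher's pairwise preferences
    return torch.relu(margin - prefer * d).mean()

dense = torch.tensor([0.9, 0.2, 0.5])
lex = torch.tensor([0.8, 0.6, 0.1])
print(rank_consistency_loss(dense, lex))
```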

SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval

1 code implementation 6 Jul 2022 Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei

It employs a simple bottleneck architecture that learns to compress the passage information into a dense vector through self-supervised pre-training.

Language Modelling Passage Retrieval +1
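
A minimal PyTorch sketch of the bottleneck idea (not the authors' exact architecture): a deep encoder compresses the passage into one vector, and a deliberately shallow decoder must reconstruct masked tokens through that vector:

```python
import torch
import torch.nn as nn

class BottleneckPretrainer(nn.Module):
    # Toy representation-bottleneck pre-trainer; sizes are illustrative.
    def __init__(self, vocab=30522, dim=256, enc_layers=4, dec_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        enc = nn.TransformerEncoderLayer(dim, 4, 512, batch_first=True)
        dec = nn.TransformerEncoderLayer(dim, 4, 512, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, enc_layers)
        self.decoder = nn.TransformerEncoder(dec, dec_layers)  # kept shallow
        self.lm_head = nn.Linear(dim, vocab)

    def forward(self, enc_ids, dec_ids):
        h = self.encoder(self.embed(enc_ids))
        bottleneck = h[:, :1]  # position 0 acts as the dense passage vector
        # reconstruction pressure must flow through the bottleneck vector
        out = self.decoder(torch.cat([bottleneck, self.embed(dec_ids)], 1))
        return self.lm_head(out[:, 1:])  # MLM logits over decoder positions

ids = torch.randint(0, 30522, (2, 16))
print(BottleneckPretrainer()(ids, ids).shape)  # torch.Size([2, 16, 30522])
```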

Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation

1 code implementation 21 Jun 2022 Shengyao Zhuang, Houxing Ren, Linjun Shou, Jian Pei, Ming Gong, Guido Zuccon, Daxin Jiang

This problem is further exacerbated when using DSI for cross-lingual retrieval, where document text and query text are in different languages.

Passage Retrieval Retrieval

Towards Robust Ranker for Text Retrieval

no code implementations 16 Jun 2022 Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Guodong Long, Binxing Jiao, Daxin Jiang

A ranker plays an indispensable role in the de facto 'retrieval & rerank' pipeline, but its training still lags behind -- learning from moderate negatives and/or serving as an auxiliary module for a retriever.

Passage Retrieval Text Retrieval

Unsupervised Context Aware Sentence Representation Pretraining for Multi-lingual Dense Retrieval

1 code implementation 7 Jun 2022 Ning Wu, Yaobo Liang, Houxing Ren, Linjun Shou, Nan Duan, Ming Gong, Daxin Jiang

On the multilingual sentence retrieval task Tatoeba, our model achieves new SOTA results among methods without using bilingual data.

Language Modelling Passage Retrieval +5

Task-Specific Expert Pruning for Sparse Mixture-of-Experts

no code implementations 1 Jun 2022 Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei

The sparse Mixture-of-Experts (MoE) model is powerful for large-scale pre-training and has achieved promising results due to its model capacity.

THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption

no code implementations Findings (ACL) 2022 Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei

As more and more pre-trained language models adopt on-cloud deployment, privacy issues grow quickly, mainly due to the exposure of plain-text user data (e.g., search history, medical records, bank accounts).

Privacy Preserving

Negative Sampling for Contrastive Representation Learning: A Review

no code implementations 1 Jun 2022 Lanling Xu, Jianxun Lian, Wayne Xin Zhao, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Ji-Rong Wen

The learn-to-compare paradigm of contrastive representation learning (CRL), which compares positive samples with negative ones for representation learning, has achieved great success in a wide range of domains, including natural language processing, computer vision, information retrieval and graph learning.

Graph Learning Information Retrieval +2
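
The methods surveyed largely build on the InfoNCE objective, where the choice of negatives is the design axis under review; a standard formulation:

```python
import torch
import torch.nn.functional as F

# Standard InfoNCE: each anchor is compared against one positive and a set
# of negatives; how those negatives are sampled is what the review surveys.
def info_nce(anchor, positive, negatives, temperature=0.1):
    # anchor/positive: [B, D]; negatives: [B, N, D]; all L2-normalized
    pos = (anchor * positive).sum(-1, keepdim=True)          # [B, 1]
    neg = torch.einsum("bd,bnd->bn", anchor, negatives)      # [B, N]
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long)   # positive = 0
    return F.cross_entropy(logits, labels)

a = F.normalize(torch.randn(4, 8), dim=-1)
p = F.normalize(torch.randn(4, 8), dim=-1)
n = F.normalize(torch.randn(4, 16, 8), dim=-1)
print(info_nce(a, p, n))
```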

Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding

no code implementations 7 May 2022 Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Xianglin Zuo, Daxin Jiang

Despite the great success of spoken language understanding (SLU) in high-resource languages, it remains challenging in low-resource languages mainly due to the lack of labeled training data.

Contrastive Learning Spoken Language Understanding +1

Bridging the Gap between Language Models and Cross-Lingual Sequence Labeling

no code implementations NAACL 2022 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang

Large-scale cross-lingual pre-trained language models (xPLMs) have shown effectiveness in cross-lingual sequence labeling tasks (xSL), such as cross-lingual machine reading comprehension (xMRC) by transferring knowledge from a high-resource language to low-resource languages.

Contrastive Learning Language Modelling +1

Transformer-Empowered Content-Aware Collaborative Filtering

no code implementations 2 Apr 2022 Weizhe Lin, Linjun Shou, Ming Gong, Jian Pei, Zhilin Wang, Bill Byrne, Daxin Jiang

Knowledge graph (KG) based Collaborative Filtering is an effective approach to personalizing recommendation systems for relatively static domains such as movies and books, by leveraging structured information from KG to enrich both item and user representations.

Collaborative Filtering Contrastive Learning +1

Multi-View Document Representation Learning for Open-Domain Dense Retrieval

no code implementations ACL 2022 Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan

Second, to prevent multi-view embeddings from collapsing to the same one, we further propose a global-local loss with annealed temperature to encourage the multiple viewers to better align with different potential queries.

Representation Learning Retrieval

HeterMPC: A Heterogeneous Graph Neural Network for Response Generation in Multi-Party Conversations

1 code implementation ACL 2022 Jia-Chen Gu, Chao-Hong Tan, Chongyang Tao, Zhen-Hua Ling, Huang Hu, Xiubo Geng, Daxin Jiang

To address these challenges, we present HeterMPC, a heterogeneous graph-based neural network for response generation in MPCs which models the semantics of utterances and interlocutors simultaneously with two types of nodes in a graph.

Graph Neural Network Response Generation

NÜWA-LIP: Language Guided Image Inpainting with Defect-free VQGAN

no code implementations 10 Feb 2022 Minheng Ni, Chenfei Wu, Haoyang Huang, Daxin Jiang, WangMeng Zuo, Nan Duan

Language guided image inpainting aims to fill in the defective regions of an image under the guidance of text while keeping non-defective regions unchanged.

Image Inpainting

PCL: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings

1 code implementation 28 Jan 2022 Qiyu Wu, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Daxin Jiang

A straightforward solution is resorting to more diverse positives from a multi-augmenting strategy, while it remains an open question how to learn, without supervision, from diverse positives of uneven augmenting quality in the text field.

Contrastive Learning Open-Ended Question Answering +3

From Good to Best: Two-Stage Training for Cross-lingual Machine Reading Comprehension

no code implementations 9 Dec 2021 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang

Cross-lingual Machine Reading Comprehension (xMRC) is challenging due to the lack of training data in low-resource languages.

Contrastive Learning Machine Reading Comprehension

NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion

1 code implementation 24 Nov 2021 Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, Nan Duan

To cover language, image, and video at the same time for different scenarios, a 3D transformer encoder-decoder framework is designed, which can not only deal with videos as 3D data but also adapt to texts and images as 1D and 2D data, respectively.

Decoder Text-to-Image Generation +3

Multimodal Dialogue Response Generation

no code implementations ACL 2022 Qingfeng Sun, Yujing Wang, Can Xu, Kai Zheng, Yaming Yang, Huang Hu, Fei Xu, Jessica Zhang, Xiubo Geng, Daxin Jiang

In such a low-resource setting, we devise a novel conversational agent, Divter, in order to isolate parameters that depend on multimodal dialogues from the entire generation model.

Dialogue Generation Response Generation +1

RecInDial: A Unified Framework for Conversational Recommendation with Pretrained Language Models

no code implementations 14 Oct 2021 Lingzhi Wang, Huang Hu, Lei Sha, Can Xu, Kam-Fai Wong, Daxin Jiang

Furthermore, we propose to evaluate the CRS models in an end-to-end manner, which can reflect the overall performance of the entire system rather than the performance of individual modules, compared to the separate evaluations of the two modules used in previous work.

Conversational Recommendation Dialogue Generation +2

EventBERT: A Pre-Trained Model for Event Correlation Reasoning

no code implementations 13 Oct 2021 Yucheng Zhou, Xiubo Geng, Tao Shen, Guodong Long, Daxin Jiang

Event correlation reasoning infers whether a natural language paragraph containing multiple events conforms to human common sense.

Cloze Test Common Sense Reasoning +1

Building an Efficient and Effective Retrieval-based Dialogue System via Mutual Learning

no code implementations 1 Oct 2021 Chongyang Tao, Jiazhan Feng, Chang Liu, Juntao Li, Xiubo Geng, Daxin Jiang

For this task, the adoption of pre-trained language models (such as BERT) has led to remarkable progress in a number of benchmarks.

Re-Ranking Retrieval

Learning to Ground Visual Objects for Visual Dialog

no code implementations Findings (EMNLP) 2021 Feilong Chen, Xiuyi Chen, Can Xu, Daxin Jiang

Specifically, a posterior distribution over visual objects is inferred from both context (history and questions) and answers, and it ensures the appropriate grounding of visual objects during the training process.

Visual Dialog

Learning from Multiple Noisy Augmented Data Sets for Better Cross-Lingual Spoken Language Understanding

no code implementations EMNLP 2021 YingMei Guo, Linjun Shou, Jian Pei, Ming Gong, Mingxing Xu, Zhiyong Wu, Daxin Jiang

Although various data augmentation approaches have been proposed to synthesize training data in low-resource target languages, the augmented data sets are often noisy, and thus impede the performance of SLU models.

Data Augmentation Denoising +1

Smart Bird: Learnable Sparse Attention for Efficient and Effective Transformer

no code implementations 20 Aug 2021 Chuhan Wu, Fangzhao Wu, Tao Qi, Binxing Jiao, Daxin Jiang, Yongfeng Huang, Xing Xie

We then sample token pairs based on their probability scores derived from the sketched attention matrix to generate different sparse attention index matrices for different attention heads.
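
That sampling step might look roughly like the following, where a cheap low-dimensional "sketch" attention scores all token pairs and per-head index pairs are drawn in proportion to those scores (dimensions and details are illustrative guesses, not the paper's code):

```python
import torch

# Score all token pairs with a low-dimensional sketch attention, then
# sample (query, key) index pairs for each head from those scores.
def sample_sparse_indices(q_sketch, k_sketch, pairs_per_head=64):
    # q_sketch, k_sketch: [heads, seq_len, small_dim]
    scores = torch.softmax(
        q_sketch @ k_sketch.transpose(-1, -2) / q_sketch.shape[-1] ** 0.5, -1)
    h, L, _ = scores.shape
    flat = scores.reshape(h, L * L)
    idx = torch.multinomial(flat, pairs_per_head)      # per-head sampling
    return torch.stack([idx // L, idx % L], dim=-1)    # (query, key) pairs

print(sample_sparse_indices(torch.randn(2, 32, 8),
                            torch.randn(2, 32, 8)).shape)  # [2, 64, 2]
```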

Reasoning over Entity-Action-Location Graph for Procedural Text Understanding

no code implementations ACL 2021 Hao Huang, Xiubo Geng, Jian Pei, Guodong Long, Daxin Jiang

Procedural text understanding aims at tracking the states (e.g., create, move, destroy) and locations of the entities mentioned in a given paragraph.

graph construction Graph Neural Network +2

Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation

no code implementations NeurIPS 2021 YuFei Wang, Can Xu, Huang Hu, Chongyang Tao, Stephen Wan, Mark Dras, Mark Johnson, Daxin Jiang

Sequence-to-Sequence (S2S) neural text generation models, especially the pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks.

Text Generation

Language Scaling for Universal Suggested Replies Model

no code implementations NAACL 2021 Qianlan Ying, Payal Bajaj, Budhaditya Deb, Yu Yang, Wei Wang, Bojia Lin, Milad Shokouhi, Xia Song, Yang Yang, Daxin Jiang

Faced with increased compute requirements and low resources for language expansion, we build a single universal model for improving the quality and reducing run-time costs of our production system.

Continual Learning Cross-Lingual Transfer

MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding

1 code implementation ACL 2021 Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Xiubo Geng, Daxin Jiang

Recently, various neural models for multi-party conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction.

Language Modelling Speaker Identification

Improving Zero-Shot Cross-lingual Transfer for Multilingual Question Answering over Knowledge Graph

no code implementations NAACL 2021 Yucheng Zhou, Xiubo Geng, Tao Shen, Wenqiang Zhang, Daxin Jiang

That is, we can only access training data in a high-resource language, while needing to answer multilingual questions without any labeled data in target languages.

Bilingual Lexicon Induction Question Answering +1

Maria: A Visual Experience Powered Conversational Agent

1 code implementation ACL 2021 Zujie Liang, Huang Hu, Can Xu, Chongyang Tao, Xiubo Geng, Yining Chen, Fan Liang, Daxin Jiang

The retriever aims to retrieve a correlated image to the dialog from an image index, while the visual concept detector extracts rich visual knowledge from the image.

Integrating Pre-trained Model into Rule-based Dialogue Management

no code implementations 17 Feb 2021 Jun Quan, Meng Yang, Qiang Gan, Deyi Xiong, Yiming Liu, Yuchen Dong, Fangxin Ouyang, Jun Tian, Ruiling Deng, Yongzhi Li, Yang Yang, Daxin Jiang

Rule-based dialogue management is still the most popular solution for industrial task-oriented dialogue systems for its interpretability.

Dialogue Management Management +1

ChemistryQA: A Complex Question Answering Dataset from Chemistry

no code implementations 1 Jan 2021 Zhuoyu Wei, Wei Ji, Xiubo Geng, Yining Chen, Baihua Chen, Tao Qin, Daxin Jiang

We notice that some real-world QA tasks are more complex, which cannot be solved by end-to-end neural networks or translated to any kind of formal representations.

Machine Reading Comprehension Math +1

Syntax-Enhanced Pre-trained Model

1 code implementation ACL 2021 Zenan Xu, Daya Guo, Duyu Tang, Qinliang Su, Linjun Shou, Ming Gong, Wanjun Zhong, Xiaojun Quan, Nan Duan, Daxin Jiang

We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.

Entity Typing Question Answering +1

Reinforced Multi-Teacher Selection for Knowledge Distillation

no code implementations 11 Dec 2020 Fei Yuan, Linjun Shou, Jian Pei, Wutao Lin, Ming Gong, Yan Fu, Daxin Jiang

When multiple teacher models are available in distillation, the state-of-the-art methods assign a fixed weight to a teacher model in the whole distillation.

Knowledge Distillation Model Compression
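
Example-wise teacher weighting can be sketched as below; in the paper the weights come from a policy trained with reinforcement learning, whereas here they are simply passed in as a tensor:

```python
import torch
import torch.nn.functional as F

# Weighted multi-teacher distillation with per-example teacher weights.
def multi_teacher_kd(student_logits, teacher_logits, weights, T=2.0):
    # student_logits: [B, C]; teacher_logits: [K, B, C]; weights: [B, K]
    s = F.log_softmax(student_logits / T, dim=-1)
    loss = 0.0
    for k in range(teacher_logits.size(0)):
        t = F.softmax(teacher_logits[k] / T, dim=-1)
        kl = F.kl_div(s, t, reduction="none").sum(-1)   # per-example KL
        loss = loss + (weights[:, k] * kl).mean()
    return loss * T * T

print(multi_teacher_kd(torch.randn(4, 3), torch.randn(2, 4, 3),
                       torch.softmax(torch.randn(4, 2), dim=-1)))
```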

CalibreNet: Calibration Networks for Multilingual Sequence Labeling

no code implementations 11 Nov 2020 Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Daxin Jiang

To tackle the challenge of lack of training data in low-resource languages, we dedicatedly develop a novel unsupervised phrase boundary recovery pre-training task to enhance the multilingual boundary detection capability of CalibreNet.

Boundary Detection Cross-Lingual NER +4

Cross-lingual Machine Reading Comprehension with Language Branch Knowledge Distillation

no code implementations COLING 2020 Junhao Liu, Linjun Shou, Jian Pei, Ming Gong, Min Yang, Daxin Jiang

Then, we devise a multilingual distillation approach to amalgamate knowledge from multiple language branch models to a single model for all target languages.

Knowledge Distillation Machine Reading Comprehension +1

A Graph Representation of Semi-structured Data for Web Question Answering

no code implementations COLING 2020 Xingyao Zhang, Linjun Shou, Jian Pei, Ming Gong, Lijie Wen, Daxin Jiang

The abundant semi-structured data on the Web, such as HTML-based tables and lists, provide commercial search engines a rich information source for question answering (QA).

Question Answering

Towards Interpretable Reasoning over Paragraph Effects in Situation

1 code implementation EMNLP 2020 Mucheng Ren, Xiubo Geng, Tao Qin, Heyan Huang, Daxin Jiang

We focus on the task of reasoning over paragraph effects in situation, which requires a model to understand the cause and effect described in a background paragraph, and apply the knowledge to a novel situation.

Knowledge-Aware Procedural Text Understanding with Multi-Stage Training

no code implementations 28 Sep 2020 Zhihan Zhang, Xiubo Geng, Tao Qin, Yunfang Wu, Daxin Jiang

In this work, we focus on the task of procedural text understanding, which aims to comprehend such documents and track entities' states and locations during a process.

Procedural Text Understanding

No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension

no code implementations Findings of the Association for Computational Linguistics 2020 Xuguang Wang, Linjun Shou, Ming Gong, Nan Duan, Daxin Jiang

The Natural Questions (NQ) benchmark set brings new challenges to Machine Reading Comprehension: the answers are not only at different levels of granularity (long and short), but also of richer types (including no-answer, yes/no, single-span and multi-span).

Machine Reading Comprehension Natural Questions

Difference-aware Knowledge Selection for Knowledge-grounded Conversation Generation

1 code implementation Findings of the Association for Computational Linguistics 2020 Chujie Zheng, Yunbo Cao, Daxin Jiang, Minlie Huang

In a multi-turn knowledge-grounded dialog, the difference between the knowledge selected at different turns usually provides potential clues to knowledge selection, which has been largely neglected in previous research.

GraphCodeBERT: Pre-training Code Representations with Data Flow

1 code implementation ICLR 2021 Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, Ming Zhou

Instead of taking syntactic-level structure of code like abstract syntax tree (AST), we use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.

Clone Detection Code Completion +7
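
A much-simplified stand-in for the data-flow extraction: for each assignment, draw a "where-the-value-comes-from" edge from every variable read on the right-hand side to the assigned variable:

```python
import ast

# Toy def-use edge extraction; GraphCodeBERT's real pipeline handles far
# more cases (loops, calls, reassignments) across multiple languages.
def value_flow_edges(source: str):
    edges = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            reads = [n.id for n in ast.walk(node.value)
                     if isinstance(n, ast.Name)]
            targets = [t.id for tgt in node.targets
                       for t in ast.walk(tgt) if isinstance(t, ast.Name)]
            edges += [(r, w) for w in targets for r in reads]
    return edges

print(value_flow_edges("x = 1\ny = x + 2\nz = y * x"))
# -> [('x', 'y'), ('y', 'z'), ('x', 'z')]
```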

Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues

no code implementations 14 Sep 2020 Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, Rui Yan

To address these issues, in this paper, we propose learning a context-response matching model with auxiliary self-supervised tasks designed for the dialogue data based on pre-trained language models.

Conversational Response Selection Retrieval

Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder

1 code implementation ACL 2020 Daya Guo, Duyu Tang, Nan Duan, Jian Yin, Daxin Jiang, Ming Zhou

Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs.

Common Sense Reasoning Decoder +1

Mining Implicit Relevance Feedback from User Behavior for Web Question Answering

no code implementations 13 Jun 2020 Linjun Shou, Shining Bo, Feixiang Cheng, Ming Gong, Jian Pei, Daxin Jiang

In this paper, we make the first study to explore the correlation between user behavior and passage relevance, and propose a novel approach for mining training data for Web QA.

Passage Ranking Question Answering

Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning

no code implementations ICLR 2021 Ruozi Huang, Huang Hu, Wei Wu, Kei Sawada, Mi Zhang, Daxin Jiang

In this paper, we formalize the music-conditioned dance generation as a sequence-to-sequence learning problem and devise a novel seq2seq architecture to efficiently process long sequences of music features and capture the fine-grained correspondence between music and dance.

Motion Synthesis Pose Estimation

Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension

1 code implementation ACL 2020 Bo Zheng, Haoyang Wen, Yaobo Liang, Nan Duan, Wanxiang Che, Daxin Jiang, Ming Zhou, Ting Liu

Natural Questions is a new challenging machine reading comprehension benchmark with two-grained answers, which are a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).

Graph Attention Machine Reading Comprehension +1

RikiNet: Reading Wikipedia Pages for Natural Question Answering

no code implementations ACL 2020 Dayiheng Liu, Yeyun Gong, Jie Fu, Yu Yan, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Nan Duan

The representations are then fed into the predictor to obtain the span of the short answer, the paragraph of the long answer, and the answer type in a cascaded manner.

Natural Language Understanding Natural Questions +1

Enhancing Answer Boundary Detection for Multilingual Machine Reading Comprehension

no code implementations ACL 2020 Fei Yuan, Linjun Shou, Xuanyu Bai, Ming Gong, Yaobo Liang, Nan Duan, Yan Fu, Daxin Jiang

Multilingual pre-trained models could leverage the training data from a rich source language (such as English) to improve performance on low resource languages.

Boundary Detection Machine Reading Comprehension +2

Pre-training Text Representations as Meta Learning

no code implementations 12 Apr 2020 Shangwen Lv, Yuechen Wang, Daya Guo, Duyu Tang, Nan Duan, Fuqing Zhu, Ming Gong, Linjun Shou, Ryan Ma, Daxin Jiang, Guihong Cao, Ming Zhou, Songlin Hu

In this work, we introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.

Language Modelling Meta-Learning +2

Diverse, Controllable, and Keyphrase-Aware: A Corpus and Method for News Multi-Headline Generation

1 code implementation EMNLP 2020 Dayiheng Liu, Yeyun Gong, Jie Fu, Wei Liu, Yu Yan, Bo Shao, Daxin Jiang, Jiancheng Lv, Nan Duan

Furthermore, we propose a simple and effective method to mine the keyphrases of interest in the news article and build the first large-scale keyphrase-aware news headline corpus, which contains over 180K aligned triples of <news article, headline, keyphrase>.

Decoder Diversity +2

XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation

2 code implementations 3 Apr 2020 Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, Ming Zhou

In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks.

Natural Language Understanding XLM-R

DC-BERT: Decoupling Question and Document for Efficient Contextual Encoding

no code implementations 28 Feb 2020 Yuyu Zhang, Ping Nie, Xiubo Geng, Arun Ramamurthy, Le Song, Daxin Jiang

Recent studies on open-domain question answering have achieved prominent performance improvement using pre-trained language models such as BERT.

Natural Questions Open-Domain Question Answering +1

Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System

no code implementations 18 Oct 2019 Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang

The experimental results show that our method can significantly outperform the baseline methods and even achieve comparable results with the original teacher models, along with a substantial speedup of model inference.

General Knowledge Knowledge Distillation +3

Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning

no code implementations 12 Sep 2019 Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang

Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data.

Meta-Learning Semantic Parsing +1

Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training

no code implementations 16 Aug 2019 Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, Ming Zhou

We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner.

Ranked #5 on Image-to-Text Retrieval on MS COCO (Recall@10 metric)

Image-text matching Image-text Retrieval +5

NeuronBlocks: Building Your NLP DNN Models Like Playing Lego

2 code implementations IJCNLP 2019 Ming Gong, Linjun Shou, Wutao Lin, Zhijie Sang, Quanjia Yan, Ze Yang, Feixiang Cheng, Daxin Jiang

Deep Neural Networks (DNN) have been widely employed in industry to address various Natural Language Processing (NLP) tasks.

Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System

no code implementations 21 Apr 2019 Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang

Deep pre-training and fine-tuning models (like BERT, OpenAI GPT) have demonstrated excellent results in question answering areas.

Knowledge Distillation Model Compression +1

Assertion-based QA with Question-Aware Open Information Extraction

no code implementations 23 Jan 2018 Zhao Yan, Duyu Tang, Nan Duan, Shujie Liu, Wendi Wang, Daxin Jiang, Ming Zhou, Zhoujun Li

We present assertion based question answering (ABQA), an open domain question answering task that takes a question and a passage as inputs, and outputs a semi-structured assertion consisting of a subject, a predicate and a list of arguments.

Learning-To-Rank Open-Domain Question Answering +2
