no code implementations • NAACL 2022 • Hao Huang, Xiubo Geng, Guodong Long, Daxin Jiang
Precise question understanding is critical for temporal reading comprehension.
no code implementations • COLING 2022 • Tao Shen, Xiubo Geng, Daxin Jiang
Besides a norm-grounding knowledge model, we present a novel norm-supported ethical judgment model in line with neural module networks to alleviate dilemma situations and improve norm-level explainability.
no code implementations • EMNLP 2020 • Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, Ming Zhou
In this paper, we introduce XGLUE, a new benchmark dataset to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora, and evaluate their performance across a diverse set of cross-lingual tasks.
no code implementations • 4 Mar 2024 • Yifei Yang, Tianqiao Liu, Bo Shao, Hai Zhao, Linjun Shou, Ming Gong, Daxin Jiang
Webpage entity extraction is a fundamental natural language processing task in both research and applications.
1 code implementation • 6 Nov 2023 • Zilin Xiao, Ming Gong, Jie Wu, Xingyao Zhang, Linjun Shou, Jian Pei, Daxin Jiang
Generative approaches powered by large language models (LLMs) have demonstrated emergent abilities in tasks that require complex reasoning.
no code implementations • 6 Nov 2023 • Zilin Xiao, Linjun Shou, Xingyao Zhang, Jie Wu, Ming Gong, Jian Pei, Daxin Jiang
We propose CoherentED, an ED system equipped with novel designs aimed at enhancing the coherence of entity predictions.
no code implementations • 19 Sep 2023 • Ning Wu, Ming Gong, Linjun Shou, Jian Pei, Daxin Jiang
RUEL is the first method that connects user browsing data with typical recommendation datasets and can be generalized to various recommendation scenarios and datasets.
1 code implementation • 28 Jul 2023 • Xindi Wang, YuFei Wang, Can Xu, Xiubo Geng, BoWen Zhang, Chongyang Tao, Frank Rudzicz, Robert E. Mercer, Daxin Jiang
Large language models (LLMs) have shown remarkable capacity for in-context learning (ICL), in which a model learns a new task from just a few training examples without being explicitly pre-trained for it.
3 code implementations • 14 Jun 2023 • Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+.
Ranked #4 on Code Generation on CodeContests
1 code implementation • 24 May 2023 • Hao Sun, Xiao Liu, Yeyun Gong, Yan Zhang, Daxin Jiang, Linjun Yang, Nan Duan
With the advance of large language models (LLMs), the field of LLM applications has become increasingly popular, and the idea of constructing pipelines that accomplish complex tasks by stacking LLM API calls has become reality.
2 code implementations • 12 May 2023 • Jiazhan Feng, Chongyang Tao, Xiubo Geng, Tao Shen, Can Xu, Guodong Long, Dongyan Zhao, Daxin Jiang
Information retrieval (IR) plays a crucial role in locating relevant resources from vast amounts of data, and its applications have evolved from traditional knowledge bases to modern retrieval models (RMs).
1 code implementation • 9 May 2023 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Bowen Cao, Jianhui Chang, Daxin Jiang, Jia Li
Learning better unsupervised sentence representations is currently a major pursuit of the natural language processing community.
1 code implementation • 8 May 2023 • Chenxiao Liu, Shuai Lu, Weizhu Chen, Daxin Jiang, Alexey Svyatkovskiy, Shengyu Fu, Neel Sundaresan, Nan Duan
Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code.
1 code implementation • 8 May 2023 • Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
We demonstrate that our PKG framework can enhance the performance of "black-box" LLMs on a range of domain knowledge-intensive tasks that require factual (+7.9%), tabular (+11.9%), medical (+3.0%), and multimodal (+8.1%) knowledge.
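The plug-in idea can be pictured with a short sketch: a small, tunable knowledge module produces background text that is simply prepended to the black-box LLM's prompt. Everything below is illustrative; `knowledge_module` and `blackbox_llm` are hypothetical stand-ins, not the paper's components.

```python
# Minimal sketch of the plug-in idea, assuming hypothetical stand-ins:
# a small, tunable knowledge module generates background text that is
# prepended to the prompt of a black-box LLM. Not the paper's code.

def knowledge_module(question: str) -> str:
    # In the paper this is a trained generator; here, a canned lookup.
    facts = {"aspirin": "Aspirin is a nonsteroidal anti-inflammatory drug."}
    for key, fact in facts.items():
        if key in question.lower():
            return fact
    return ""

def blackbox_llm(prompt: str) -> str:
    # Stand-in for an API call to a closed model.
    return f"<answer conditioned on: {prompt!r}>"

def answer(question: str) -> str:
    background = knowledge_module(question)
    prefix = f"Background: {background}\n" if background else ""
    return blackbox_llm(prefix + f"Question: {question}\nAnswer:")

print(answer("What is aspirin used for?"))
```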
no code implementations • 27 Apr 2023 • Tao Shen, Guodong Long, Xiubo Geng, Chongyang Tao, Tianyi Zhou, Daxin Jiang
In this work, we propose a simple method that applies a large language model (LLM) to large-scale retrieval in zero-shot scenarios.
4 code implementations • 24 Apr 2023 • Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Daxin Jiang
In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLMs instead of humans.
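A minimal sketch of such an instruction-evolution loop is given below, assuming a generic `call_llm` client and made-up rewrite templates; the paper's actual prompts and filtering steps are not reproduced here.

```python
import random

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client; replace as needed.
    return f"<rewritten instruction for: {prompt[-40:]}>"

# Illustrative evolution operations in the spirit of instruction evolving;
# the paper's actual prompt templates differ.
EVOLVE_TEMPLATES = [
    "Rewrite this instruction to require one more reasoning step:\n{inst}",
    "Add a concrete constraint (format, length, or input) to:\n{inst}",
    "Make this instruction rarer and more specialized:\n{inst}",
]

def evolve(seed_instructions: list[str], rounds: int = 3) -> list[str]:
    pool = list(seed_instructions)
    for _ in range(rounds):
        new = []
        for inst in pool:
            template = random.choice(EVOLVE_TEMPLATES)
            new.append(call_llm(template.format(inst=inst)))
        pool.extend(new)  # a real pipeline would filter failed evolutions
    return pool

print(len(evolve(["Sort a list of numbers."])))  # 1 seed -> 8 instructions
```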
1 code implementation • 17 Apr 2023 • Shengyao Zhuang, Linjun Shou, Jian Pei, Ming Gong, Houxing Ren, Guido Zuccon, Daxin Jiang
To address this challenge, we propose ToRoDer (TypOs-aware bottlenecked pre-training for RObust DEnse Retrieval), a novel pre-training strategy for DRs that increases their robustness to misspelled queries while preserving their effectiveness in downstream retrieval tasks.
2 code implementations • 10 Apr 2023 • Nan Yang, Tao Ge, Liang Wang, Binxing Jiao, Daxin Jiang, Linjun Yang, Rangan Majumder, Furu Wei
We propose LLMA, an LLM accelerator to losslessly speed up Large Language Model (LLM) inference with references.
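The lossless speedup rests on a simple observation: when the output is expected to overlap heavily with a reference document (as in retrieval-augmented generation), spans can be copied from the reference and merely verified by the model. Below is a toy sketch of that idea under greedy decoding; `greedy_next` is a stand-in for a real model's argmax next-token function, and a real system would verify all copied tokens in one parallel forward pass rather than one by one.

```python
# Toy sketch of reference-based lossless decoding; not the authors' code.

def greedy_next(prefix: list[str]) -> str:
    # Stand-in LM: deterministically picks the next token. A real system
    # would run a forward pass here (and can score many positions at once).
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return vocab[sum(map(len, prefix)) % len(vocab)]

def decode_with_reference(prompt: list[str], reference: list[str],
                          span_len: int = 4, max_new: int = 12) -> list[str]:
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # Propose a span copied verbatim from the reference document.
        start = (len(out) - len(prompt)) % max(1, len(reference) - span_len)
        draft = reference[start:start + span_len]
        # Verify the copied tokens against the model; accepted tokens cost
        # (almost) nothing, and output is identical to plain greedy decoding.
        for tok in draft:
            if greedy_next(out) == tok:
                out.append(tok)                   # accepted: token is "free"
            else:
                out.append(greedy_next(out))      # rejected: model fallback
                break
    return out

print(decode_with_reference(["the"], ["cat", "sat", "on", "the", "mat", "."]))
```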
no code implementations • 27 Mar 2023 • Ning Wu, Ming Gong, Linjun Shou, Shining Liang, Daxin Jiang
First, we propose to model objective and subjective dimensions of generated text based on a role-player prompting mechanism.
no code implementations • 27 Mar 2023 • Houxing Ren, Linjun Shou, Jian Pei, Ning Wu, Ming Gong, Daxin Jiang
In this paper, we propose to mine and generate self-supervised training data based on a large-scale unlabeled corpus.
no code implementations • 27 Mar 2023 • Houxing Ren, Linjun Shou, Ning Wu, Ming Gong, Daxin Jiang
However, we find that the performance of the cross-encoder re-ranker is heavily influenced by the number of training samples and the quality of negative samples, both of which are hard to obtain in the cross-lingual setting.
no code implementations • 16 Feb 2023 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, Jia Li
For instance, TPLMs jointly pre-trained on table and text input can be effective for tasks that also take joint table-text input, such as table question answering, but may fail on tasks whose input is tables or text alone, such as table retrieval.
1 code implementation • 6 Feb 2023 • Ziyang Luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
The conventional dense retrieval paradigm relies on encoding images and texts into dense representations using dual-stream encoders; however, it suffers from low retrieval speed in large-scale retrieval scenarios.
1 code implementation • 3 Feb 2023 • Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan
Specifically, we propose a multilingual PLM called masked sentence model (MSM), which consists of a sentence encoder to generate the sentence representations, and a document encoder applied to a sequence of sentence vectors from a document.
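A rough PyTorch sketch of this hierarchical design follows, with illustrative dimensions, mean pooling for sentence vectors, and a learned mask vector; the actual MSM objective contrasts the contextualized vector at each masked position against the true sentence vector.

```python
import torch
import torch.nn as nn

class MaskedSentenceModel(nn.Module):
    """Hierarchical sketch in the spirit of MSM: a sentence encoder yields
    one vector per sentence; a document encoder contextualizes the sequence
    of sentence vectors, with some positions replaced by a learned mask.
    Dimensions and layer counts are illustrative, not the paper's."""
    def __init__(self, vocab=30522, dim=256):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.sent_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 4, batch_first=True), 2)
        self.doc_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, 4, batch_first=True), 2)
        self.mask_vec = nn.Parameter(torch.randn(dim))

    def forward(self, token_ids, sent_mask):
        # token_ids: (S, L) sentences; sent_mask: (S,) bool, True = masked
        sent_vecs = self.sent_enc(self.tok(token_ids)).mean(dim=1)   # (S, D)
        inp = torch.where(sent_mask[:, None], self.mask_vec, sent_vecs)
        ctx = self.doc_enc(inp.unsqueeze(0)).squeeze(0)              # (S, D)
        # Training would contrast ctx[sent_mask] against sent_vecs[sent_mask].
        return ctx, sent_vecs

model = MaskedSentenceModel()
ids = torch.randint(0, 30522, (6, 12))            # 6 sentences, 12 tokens
ctx, vecs = model(ids, torch.tensor([0, 1, 0, 0, 1, 0], dtype=torch.bool))
print(ctx.shape, vecs.shape)
```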
no code implementations • CVPR 2023 • Meng Cao, Fangyun Wei, Can Xu, Xiubo Geng, Long Chen, Can Zhang, Yuexian Zou, Tao Shen, Daxin Jiang
Weakly-Supervised Video Grounding (WSVG) aims to localize events of interest in untrimmed videos with only video-level annotations.
1 code implementation • ICCV 2023 • Ziyang Luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
To address this issue, we propose a novel sparse retrieval paradigm for ITR that exploits sparse representations in the vocabulary space for images and texts.
no code implementations • 20 Dec 2022 • Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Guodong Long, Can Xu, Daxin Jiang
Long document retrieval aims to fetch query-relevant documents from a large-scale collection, where knowledge distillation has become the de facto approach to improving a retriever by mimicking a heterogeneous yet powerful cross-encoder.
no code implementations • 20 Dec 2022 • Chongyang Tao, Chang Liu, Tao Shen, Can Xu, Xiubo Geng, Binxing Jiao, Daxin Jiang
Unlike previous works that rely on only one positive and several hard negatives as candidate passages, we create dark examples that all have moderate relevance to the query through mixing-up and masking in discrete space.
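A minimal sketch of how such "dark examples" could be constructed is shown below, assuming BERT-style token ids and made-up mixing ratios; the paper's exact recipe may differ.

```python
import random

MASK_ID = 103  # BERT-style [MASK] token id; an assumption for illustration

def dark_example(pos_ids: list[int], neg_ids: list[int],
                 mix_ratio: float = 0.3, mask_ratio: float = 0.15) -> list[int]:
    """Create a passage of moderate relevance via discrete mix-up + masking.

    A sketch of the general idea, not the authors' exact recipe."""
    out = list(pos_ids)
    n = len(out)
    # Mix-up in discrete space: overwrite a fraction of positions with
    # tokens drawn from an irrelevant (or hard negative) passage.
    for i in random.sample(range(n), int(n * mix_ratio)):
        out[i] = random.choice(neg_ids)
    # Masking: hide a fraction of positions (possibly overlapping the above).
    for i in random.sample(range(n), int(n * mask_ratio)):
        out[i] = MASK_ID
    return out

pos = list(range(1000, 1020))   # toy token ids of a relevant passage
neg = list(range(2000, 2020))   # toy token ids of an irrelevant passage
print(dark_example(pos, neg))
```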
1 code implementation • 15 Dec 2022 • Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen
Pre-trained Transformers (e.g., BERT) have been commonly used in existing dense retrieval methods for parameter initialization, and recent studies are exploring more effective pre-training tasks for further improving the quality of dense vectors.
1 code implementation • 7 Dec 2022 • Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei
This paper presents E5, a family of state-of-the-art text embeddings that transfer well to a wide range of tasks.
Ranked #11 on Only Connect Walls Dataset Task 1 (Grouping) on OCW (using extra training data)
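A short usage sketch follows, assuming the publicly released Hugging Face checkpoint `intfloat/e5-base` and its documented convention of prefixing inputs with "query: " and "passage: "; the mean pooling and cosine similarity below are the standard recipe, not code from the paper.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-base")
model = AutoModel.from_pretrained("intfloat/e5-base")

def embed(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (B, L, D)
    mask = batch["attention_mask"].unsqueeze(-1)            # (B, L, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)           # mean pooling
    return F.normalize(pooled, dim=-1)

q = embed(["query: how do dense retrievers work"])
p = embed(["passage: Dense retrievers encode text into vectors ..."])
print((q @ p.T).item())   # cosine similarity; higher = more relevant
```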
no code implementations • 21 Nov 2022 • Qiushi Zhu, Long Zhou, Ziqiang Zhang, Shujie Liu, Binxing Jiao, Jie Zhang, LiRong Dai, Daxin Jiang, Jinyu Li, Furu Wei
Although speech is a simple and effective way for humans to communicate with the outside world, a more realistic speech interaction contains multimodal information, e.g., vision and text.
1 code implementation • 18 Oct 2022 • Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan
In this paper, we present SCodeR, a Soft-labeled contrastive pre-training framework with two positive sample construction methods to learn functional-level Code Representation.
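The soft-labeled objective can be written compactly: instead of a one-hot positive per query, the contrastive loss targets a full distribution over in-batch candidates. The sketch below shows only that loss, under assumed shapes; how SCodeR actually derives the soft targets is not reproduced here.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(code_emb, text_emb, soft_targets, tau=0.05):
    """Contrastive loss with soft labels instead of one-hot positives.

    A sketch of the general soft-labeled objective, not SCodeR's exact one.
    code_emb, text_emb: (B, D) embeddings of paired code and text.
    soft_targets: (B, B) rows summing to 1, giving each in-batch candidate's
    estimated probability of being a true positive."""
    code_emb = F.normalize(code_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = code_emb @ text_emb.t() / tau        # (B, B) similarities
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

B, D = 8, 32
targets = torch.softmax(torch.randn(B, B) * 2, dim=-1)  # toy soft labels
loss = soft_contrastive_loss(torch.randn(B, D), torch.randn(B, D), targets)
print(loss.item())
```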
1 code implementation • 11 Oct 2022 • JunJie Huang, Wanjun Zhong, Qian Liu, Ming Gong, Daxin Jiang, Nan Duan
However, training an effective dense table-text retriever is difficult due to the challenges of table-text discrepancy and data sparsity problem.
1 code implementation • 27 Sep 2022 • Zhenghao Lin, Yeyun Gong, Xiao Liu, Hang Zhang, Chen Lin, Anlei Dong, Jian Jiao, Jingwen Lu, Daxin Jiang, Rangan Majumder, Nan Duan
It is common that a better teacher model results in a worse student via distillation due to the non-negligible gap between teacher and student.
1 code implementation • 31 Aug 2022 • Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang
In large-scale retrieval, the lexicon-weighting paradigm, learning weighted sparse representations in vocabulary space, has shown promising results with high quality and low latency.
2 code implementations • 29 Aug 2022 • Kai Zhang, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Binxing Jiao, Daxin Jiang
The alignment is achieved by weakened knowledge distillation to enlighten the retriever via two aspects -- 1) a lexicon-augmented contrastive objective to challenge the dense encoder, and 2) a pair-wise rank-consistent regularization to incline the dense model's behavior toward the lexicon-aware one.
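The second aspect admits a compact sketch: for every pair of candidates that the lexicon-aware model ranks one way, penalize the dense model when its score margin points the other way. The implementation below is an illustrative logistic formulation under assumed score shapes, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def pairwise_rank_consistency(dense_scores, lexicon_scores):
    """Penalize candidate pairs whose order under the dense model disagrees
    with their order under the lexicon-aware model.
    Assumes scores of shape (B, N): B queries, N candidate passages each."""
    d_margin = dense_scores.unsqueeze(-1) - dense_scores.unsqueeze(-2)
    lex_margin = lexicon_scores.unsqueeze(-1) - lexicon_scores.unsqueeze(-2)
    # Where the lexicon model ranks i above j (lex_margin > 0), push the
    # dense margin to be positive as well, via a logistic (softplus) loss.
    mask = (lex_margin > 0).float()
    return (F.softplus(-d_margin) * mask).sum() / mask.sum().clamp(min=1)

dense = torch.randn(4, 16)
lexicon = torch.randn(4, 16)
print(pairwise_rank_consistency(dense, lexicon).item())
```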
1 code implementation • 6 Jul 2022 • Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei
It employs a simple bottleneck architecture that learns to compress the passage information into a dense vector through self-supervised pre-training.
1 code implementation • 6 Jul 2022 • Qianglong Chen, Xiangji Zeng, Jiangang Zhu, Yin Zhang, Bojia Lin, Yang Yang, Daxin Jiang
Gazetteers are widely used in Chinese named entity recognition (NER) to enhance span boundary detection and type classification.
1 code implementation • 21 Jun 2022 • Shengyao Zhuang, Houxing Ren, Linjun Shou, Jian Pei, Ming Gong, Guido Zuccon, Daxin Jiang
This problem is further exacerbated when using DSI for cross-lingual retrieval, where document text and query text are in different languages.
no code implementations • 21 Jun 2022 • YuFei Wang, Jiayi Zheng, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Daxin Jiang
This paper focuses on the data augmentation for low-resource NLP tasks where the training set is limited.
no code implementations • 16 Jun 2022 • Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Guodong Long, Binxing Jiao, Daxin Jiang
A ranker plays an indispensable role in the de facto 'retrieval & rerank' pipeline, but its training still lags behind -- learning from moderate negatives and/or serving as an auxiliary module for a retriever.
1 code implementation • 7 Jun 2022 • Ning Wu, Yaobo Liang, Houxing Ren, Linjun Shou, Nan Duan, Ming Gong, Daxin Jiang
On the multilingual sentence retrieval task Tatoeba, our model achieves new SOTA results among methods without using bilingual data.
no code implementations • 1 Jun 2022 • Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei
The sparse Mixture-of-Experts (MoE) model is powerful for large-scale pre-training and has achieved promising results due to its model capacity.
no code implementations • Findings (ACL) 2022 • Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei
As more and more pre-trained language models adopt on-cloud deployment, privacy issues grow quickly, mainly due to the exposure of plain-text user data (e.g., search history, medical records, bank accounts).
no code implementations • 1 Jun 2022 • Lanling Xu, Jianxun Lian, Wayne Xin Zhao, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Ji-Rong Wen
The learn-to-compare paradigm of contrastive representation learning (CRL), which compares positive samples with negative ones for representation learning, has achieved great success in a wide range of domains, including natural language processing, computer vision, information retrieval and graph learning.
no code implementations • 23 May 2022 • Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Guodong Long, Kai Zhang, Daxin Jiang
Large-scale retrieval aims to recall relevant documents from a huge collection given a query.
no code implementations • 7 May 2022 • Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Xianglin Zuo, Daxin Jiang
Despite the great success of spoken language understanding (SLU) in high-resource languages, it remains challenging in low-resource languages mainly due to the lack of labeled training data.
no code implementations • NAACL 2022 • Qingfeng Sun, Can Xu, Huang Hu, Yujing Wang, Jian Miao, Xiubo Geng, Yining Chen, Fei Xu, Daxin Jiang
(2) How to cohere with context and preserve the knowledge when generating a stylized response.
no code implementations • NAACL 2022 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang
Large-scale cross-lingual pre-trained language models (xPLMs) have shown effectiveness in cross-lingual sequence labeling tasks (xSL), such as cross-lingual machine reading comprehension (xMRC) by transferring knowledge from a high-resource language to low-resource languages.
no code implementations • 2 Apr 2022 • Weizhe Lin, Linjun Shou, Ming Gong, Jian Pei, Zhilin Wang, Bill Byrne, Daxin Jiang
Knowledge graph (KG) based Collaborative Filtering is an effective approach to personalizing recommendation systems for relatively static domains such as movies and books, by leveraging structured information from KG to enrich both item and user representations.
no code implementations • 18 Mar 2022 • Jun Quan, Ze Wei, Qiang Gan, Jingqi Yao, Jingyi Lu, Yuchen Dong, Yiming Liu, Yi Zeng, Chao Zhang, Yongzhi Li, Huang Hu, Yingying He, Yang Yang, Daxin Jiang
The conversational recommender systems (CRSs) have received extensive attention in recent years.
no code implementations • ACL 2022 • Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan
Second, to prevent multi-view embeddings from collapsing to the same one, we further propose a global-local loss with annealed temperature to encourage the multiple viewers to better align with different potential queries.
1 code implementation • ACL 2022 • Jia-Chen Gu, Chao-Hong Tan, Chongyang Tao, Zhen-Hua Ling, Huang Hu, Xiubo Geng, Daxin Jiang
To address these challenges, we present HeterMPC, a heterogeneous graph-based neural network for response generation in MPCs which models the semantics of utterances and interlocutors simultaneously with two types of nodes in a graph.
1 code implementation • Findings (ACL) 2022 • Chao-Hong Tan, Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Huang Hu, Xiubo Geng, Daxin Jiang
To address the problem, we propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
1 code implementation • ACL 2022 • Yucheng Zhou, Tao Shen, Xiubo Geng, Guodong Long, Daxin Jiang
Generating new events given context with correlated ones plays a crucial role in many event-centric reasoning tasks.
1 code implementation • ACL 2022 • YuFei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, Daxin Jiang
This paper focuses on the Data Augmentation for low-resource Natural Language Understanding (NLU) tasks.
no code implementations • 10 Feb 2022 • Minheng Ni, Chenfei Wu, Haoyang Huang, Daxin Jiang, WangMeng Zuo, Nan Duan
Language guided image inpainting aims to fill in the defective regions of an image under the guidance of text while keeping non-defective regions unchanged.
1 code implementation • 28 Jan 2022 • Qiyu Wu, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Daxin Jiang
A straightforward solution is to resort to more diverse positives from a multi-augmenting strategy, while it remains an open question how to learn, without supervision, from diverse positives of uneven augmentation quality in the text domain.
1 code implementation • 26 Jan 2022 • Xiaonan Li, Yeyun Gong, Yelong Shen, Xipeng Qiu, Hang Zhang, Bolun Yao, Weizhen Qi, Daxin Jiang, Weizhu Chen, Nan Duan
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build code-text pairs.
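For Python, such pairs can be mined directly from the syntax tree. The sketch below uses only the standard library and keeps docstring-bearing functions; it is a simplification of the actual pipeline, which also mines in-line comments and covers multiple languages.

```python
import ast
import textwrap

def code_text_pairs(source: str) -> list[tuple[str, str]]:
    """Extract (docstring, code) pairs from Python source for bimodal
    contrastive learning. A simplified, single-language sketch."""
    pairs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            if doc:  # pair the natural-language doc with the code body
                pairs.append((doc, ast.unparse(node)))
    return pairs

example = textwrap.dedent('''
    def add(a, b):
        """Return the sum of two numbers."""
        return a + b
''')
print(code_text_pairs(example))
```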
no code implementations • 9 Dec 2021 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang
Cross-lingual Machine Reading Comprehension (xMRC) is challenging due to the lack of training data in low-resource languages.
1 code implementation • 24 Nov 2021 • Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, Nan Duan
To cover language, image, and video at the same time for different scenarios, a 3D transformer encoder-decoder framework is designed, which can not only deal with videos as 3D data but also adapt to texts and images as 1D and 2D data, respectively.
Ranked #1 on Text-to-Video Generation on Kinetics
no code implementations • 19 Nov 2021 • Yuntao Li, Can Xu, Huang Hu, Lei Sha, Yan Zhang, Daxin Jiang
The sequence representation plays a key role in the learning of matching degree between the dialogue context and the response.
no code implementations • ACL 2022 • Qingfeng Sun, Yujing Wang, Can Xu, Kai Zheng, Yaming Yang, Huang Hu, Fei Xu, Jessica Zhang, Xiubo Geng, Daxin Jiang
In such a low-resource setting, we devise a novel conversational agent, Divter, in order to isolate parameters that depend on multimodal dialogues from the entire generation model.
no code implementations • 14 Oct 2021 • Lingzhi Wang, Huang Hu, Lei Sha, Can Xu, Kam-Fai Wong, Daxin Jiang
Furthermore, we propose to evaluate the CRS models in an end-to-end manner, which reflects the overall performance of the entire system rather than that of individual modules, in contrast to the separate module-level evaluations used in previous work.
no code implementations • 13 Oct 2021 • Yucheng Zhou, Xiubo Geng, Tao Shen, Guodong Long, Daxin Jiang
Event correlation reasoning infers whether a natural language paragraph containing multiple events conforms to human common sense.
no code implementations • 1 Oct 2021 • Chongyang Tao, Jiazhan Feng, Chang Liu, Juntao Li, Xiubo Geng, Daxin Jiang
For this task, the adoption of pre-trained language models (such as BERT) has led to remarkable progress in a number of benchmarks.
1 code implementation • EMNLP 2021 • Zujie Liang, Huang Hu, Can Xu, Jian Miao, Yingying He, Yining Chen, Xiubo Geng, Fan Liang, Daxin Jiang
Second, only the items mentioned in the training corpus have a chance to be recommended in the conversation.
1 code implementation • ACL 2022 • Wei Chen, Yeyun Gong, Can Xu, Huang Hu, Bolun Yao, Zhongyu Wei, Zhihao Fan, Xiaowu Hu, Bartuer Zhou, Biao Cheng, Daxin Jiang, Nan Duan
We study the problem of coarse-grained response selection in retrieval-based dialogue systems.
no code implementations • Findings (EMNLP) 2021 • Feilong Chen, Xiuyi Chen, Can Xu, Daxin Jiang
Specifically, a posterior distribution over visual objects is inferred from both context (history and questions) and answers, and it ensures the appropriate grounding of visual objects during the training process.
1 code implementation • Findings (EMNLP) 2021 • Lingzhi Wang, Xingshan Zeng, Huang Hu, Kam-Fai Wong, Daxin Jiang
In recent years, online discussion and opinion sharing on social media have been booming worldwide.
no code implementations • EMNLP 2021 • YingMei Guo, Linjun Shou, Jian Pei, Ming Gong, Mingxing Xu, Zhiyong Wu, Daxin Jiang
Although various data augmentation approaches have been proposed to synthesize training data in low-resource target languages, the augmented data sets are often noisy, and thus impede the performance of SLU models.
no code implementations • 20 Aug 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Binxing Jiao, Daxin Jiang, Yongfeng Huang, Xing Xie
We then sample token pairs based on their probability scores derived from the sketched attention matrix to generate different sparse attention index matrices for different attention heads.
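A toy version of this sampling step is sketched below with NumPy: low-dimensional projections produce a cheap "sketched" attention matrix, which is normalized into a distribution from which each head samples its own set of token-pair indices. The projection and sampling details here are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sparse_indices(q, k, n_heads=4, pairs_per_head=64):
    """Sketch: score all token pairs with low-dimensional projections, then
    sample a different sparse index set for each attention head."""
    n, d = q.shape
    proj = rng.standard_normal((d, 8)) / np.sqrt(d)   # cheap sketching proj.
    scores = (q @ proj) @ (k @ proj).T                # (n, n) sketched attn
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                              # global distribution
    flat = probs.ravel()
    heads = []
    for _ in range(n_heads):
        idx = rng.choice(n * n, size=pairs_per_head, replace=False, p=flat)
        heads.append(np.stack(np.unravel_index(idx, (n, n)), axis=1))
    return heads  # list of (pairs_per_head, 2) index arrays, one per head

q = rng.standard_normal((128, 32))
k = rng.standard_normal((128, 32))
print(sample_sparse_indices(q, k)[0][:5])
```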
no code implementations • ACL 2021 • Hao Huang, Xiubo Geng, Jian Pei, Guodong Long, Daxin Jiang
Procedural text understanding aims at tracking the states (e.g., create, move, destroy) and locations of the entities mentioned in a given paragraph.
no code implementations • NeurIPS 2021 • YuFei Wang, Can Xu, Huang Hu, Chongyang Tao, Stephen Wan, Mark Dras, Mark Johnson, Daxin Jiang
Sequence-to-Sequence (S2S) neural text generation models, especially the pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks.
no code implementations • NAACL 2021 • Qianlan Ying, Payal Bajaj, Budhaditya Deb, Yu Yang, Wei Wang, Bojia Lin, Milad Shokouhi, Xia Song, Yang Yang, Daxin Jiang
Faced with increased compute requirements and low resources for language expansion, we build a single universal model for improving the quality and reducing run-time costs of our production system.
1 code implementation • ACL 2021 • Jia-Chen Gu, Chongyang Tao, Zhen-Hua Ling, Can Xu, Xiubo Geng, Daxin Jiang
Recently, various neural models for multi-party conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction.
no code implementations • NAACL 2021 • Yucheng Zhou, Xiubo Geng, Tao Shen, Wenqiang Zhang, Daxin Jiang
That is, we can only access training data in a high-resource language, while needing to answer multilingual questions without any labeled data in target languages.
no code implementations • 1 Jun 2021 • Shining Liang, Ming Gong, Jian Pei, Linjun Shou, Wanli Zuo, Xianglin Zuo, Daxin Jiang
Named entity recognition (NER) is a fundamental component in many applications, such as Web Search and Voice Assistants.
1 code implementation • ACL 2021 • JunJie Huang, Duyu Tang, Linjun Shou, Ming Gong, Ke Xu, Daxin Jiang, Ming Zhou, Nan Duan
Finding code given a natural language query is beneficial to the productivity of software developers.
1 code implementation • ACL 2021 • Zujie Liang, Huang Hu, Can Xu, Chongyang Tao, Xiubo Geng, Yining Chen, Fan Liang, Daxin Jiang
The retriever aims to retrieve a correlated image to the dialog from an image index, while the visual concept detector extracts rich visual knowledge from the image.
2 code implementations • Findings (ACL) 2022 • Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, Nan Duan
Logical reasoning over text requires understanding critical logical information in the text and performing inference over it.
Ranked #7 on Reading Comprehension on ReClor
3 code implementations • ACL 2021 • Weizhen Qi, Yeyun Gong, Yu Yan, Can Xu, Bolun Yao, Bartuer Zhou, Biao Cheng, Daxin Jiang, Jiusheng Chen, Ruofei Zhang, Houqiang Li, Nan Duan
ProphetNet is a pre-training based natural language generation method which shows powerful performance on English text summarization and question generation tasks.
1 code implementation • Findings (EMNLP) 2021 • JunJie Huang, Duyu Tang, Wanjun Zhong, Shuai Lu, Linjun Shou, Ming Gong, Daxin Jiang, Nan Duan
In this work, we conduct a thorough examination of pretrained model based unsupervised sentence embeddings.
no code implementations • 17 Feb 2021 • Jun Quan, Meng Yang, Qiang Gan, Deyi Xiong, Yiming Liu, Yuchen Dong, Fangxin Ouyang, Jun Tian, Ruiling Deng, Yongzhi Li, Yang Yang, Daxin Jiang
Rule-based dialogue management is still the most popular solution for industrial task-oriented dialogue systems due to its interpretability.
4 code implementations • 9 Feb 2021 • Shuai Lu, Daya Guo, Shuo Ren, JunJie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu
Benchmark datasets have a significant impact on accelerating research in programming language tasks.
Ranked #1 on Cloze Test on CodeXGLUE - CT-maxmin
no code implementations • 1 Jan 2021 • Zhuoyu Wei, Wei Ji, Xiubo Geng, Yining Chen, Baihua Chen, Tao Qin, Daxin Jiang
We notice that some real-world QA tasks are more complex and cannot be solved by end-to-end neural networks or translated into any kind of formal representation.
1 code implementation • ACL 2021 • Zenan Xu, Daya Guo, Duyu Tang, Qinliang Su, Linjun Shou, Ming Gong, Wanjun Zhong, Xiaojun Quan, Nan Duan, Daxin Jiang
We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.
no code implementations • 11 Dec 2020 • Fei Yuan, Linjun Shou, Jian Pei, Wutao Lin, Ming Gong, Yan Fu, Daxin Jiang
When multiple teacher models are available in distillation, the state-of-the-art methods assign a fixed weight to a teacher model in the whole distillation.
1 code implementation • Findings (ACL) 2021 • Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, Nan Duan
Multi-task benchmarks such as GLUE and SuperGLUE have driven great progress of pretraining and transfer learning in Natural Language Processing (NLP).
no code implementations • 11 Nov 2020 • Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Daxin Jiang
To tackle the challenge of lack of training data in low-resource languages, we dedicatedly develop a novel unsupervised phrase boundary recovery pre-training task to enhance the multilingual boundary detection capability of CalibreNet.
no code implementations • COLING 2020 • Junhao Liu, Linjun Shou, Jian Pei, Ming Gong, Min Yang, Daxin Jiang
Then, we devise a multilingual distillation approach to amalgamate knowledge from multiple language branch models to a single model for all target languages.
no code implementations • COLING 2020 • Xingyao Zhang, Linjun Shou, Jian Pei, Ming Gong, Lijie Wen, Daxin Jiang
The abundant semi-structured data on the Web, such as HTML-based tables and lists, provide commercial search engines a rich information source for question answering (QA).
1 code implementation • EMNLP 2020 • Mucheng Ren, Xiubo Geng, Tao Qin, Heyan Huang, Daxin Jiang
We focus on the task of reasoning over paragraph effects in situation, which requires a model to understand the cause and effect described in a background paragraph, and apply the knowledge to a novel situation.
no code implementations • 28 Sep 2020 • Zhihan Zhang, Xiubo Geng, Tao Qin, Yunfang Wu, Daxin Jiang
In this work, we focus on the task of procedural text understanding, which aims to comprehend such documents and track entities' states and locations during a process.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Xuguang Wang, Linjun Shou, Ming Gong, Nan Duan, Daxin Jiang
The Natural Questions (NQ) benchmark set brings new challenges to Machine Reading Comprehension: the answers are not only at different levels of granularity (long and short), but also of richer types (including no-answer, yes/no, single-span and multi-span).
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Huaishao Luo, Lei Ji, Tianrui Li, Nan Duan, Daxin Jiang
Specifically, a cascaded labeling module is developed to enhance the interchange between aspect terms and improve the attention of sentiment tokens when labeling sentiment polarities.
Aspect-Based Sentiment Analysis (ABSA) +4
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Chujie Zheng, Yunbo Cao, Daxin Jiang, Minlie Huang
In a multi-turn knowledge-grounded dialog, the difference between the knowledge selected at different turns usually provides potential clues to knowledge selection, which has been largely neglected in previous research.
1 code implementation • ICLR 2021 • Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, Ming Zhou
Instead of taking syntactic-level structure of code like abstract syntax tree (AST), we use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables.
Ranked #3 on Type prediction on ManyTypes4TypeScript
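For straight-line Python, this "where-the-value-comes-from" relation can be approximated in a few lines: link each variable read to the most recent write of that name. The sketch below is a toy approximation (no branches, scopes, or aliasing), not the preprocessing used in the paper.

```python
import ast

def data_flow_edges(source: str) -> list[tuple[str, str]]:
    """Toy 'where-the-value-comes-from' edges for straight-line Python."""
    last_write: dict[str, str] = {}
    edges = []
    for stmt in ast.parse(source).body:       # straight-line code only
        names = [n for n in ast.walk(stmt) if isinstance(n, ast.Name)]
        # Python evaluates the right-hand side before assigning, so
        # connect reads to earlier writes first, then record new writes.
        for n in names:
            if isinstance(n.ctx, ast.Load) and n.id in last_write:
                edges.append((last_write[n.id], f"{n.id}@{n.lineno}"))
        for n in names:
            if isinstance(n.ctx, ast.Store):
                last_write[n.id] = f"{n.id}@{n.lineno}"
    return edges

print(data_flow_edges("x = 1\ny = x + 2\nz = x + y\n"))
# [('x@1', 'x@2'), ('x@1', 'x@3'), ('y@2', 'y@3')]
```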
no code implementations • 14 Sep 2020 • Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, Rui Yan
To address these issues, in this paper, we propose learning a context-response matching model with auxiliary self-supervised tasks designed for the dialogue data based on pre-trained language models.
Ranked #5 on Conversational Response Selection on E-commerce
1 code implementation • 24 Aug 2020 • Mengyu Zhou, Qingtao Li, Xinyi He, Yuejiang Li, Yibo Liu, Wei Ji, Shi Han, Yining Chen, Daxin Jiang, Dongmei Zhang
It is common for people to create different types of charts to explore a multi-dimensional dataset (table).
1 code implementation • ACL 2020 • Daya Guo, Duyu Tang, Nan Duan, Jian Yin, Daxin Jiang, Ming Zhou
Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs.
Ranked #1 on Common Sense Reasoning on Event2Mind test (BLEU metric)
no code implementations • 13 Jun 2020 • Linjun Shou, Shining Bo, Feixiang Cheng, Ming Gong, Jian Pei, Daxin Jiang
In this paper, we make the first study to explore the correlation between user behavior and passage relevance, and propose a novel approach for mining training data for Web QA.
no code implementations • ICLR 2021 • Ruozi Huang, Huang Hu, Wei Wu, Kei Sawada, Mi Zhang, Daxin Jiang
In this paper, we formalize the music-conditioned dance generation as a sequence-to-sequence learning problem and devise a novel seq2seq architecture to efficiently process long sequences of music features and capture the fine-grained correspondence between music and dance.
Ranked #1 on Motion Synthesis on BRACE
1 code implementation • ACL 2020 • Bo Zheng, Haoyang Wen, Yaobo Liang, Nan Duan, Wanxiang Che, Daxin Jiang, Ming Zhou, Ting Liu
Natural Questions is a new challenging machine reading comprehension benchmark with two-grained answers, which are a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
no code implementations • ACL 2020 • Dayiheng Liu, Yeyun Gong, Jie Fu, Yu Yan, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Nan Duan
The representations are then fed into the predictor to obtain the span of the short answer, the paragraph of the long answer, and the answer type in a cascaded manner.
no code implementations • EMNLP 2020 • Ruize Wang, Duyu Tang, Nan Duan, Wanjun Zhong, Zhongyu Wei, Xuanjing Huang, Daxin Jiang, Ming Zhou
We study the detection of propagandistic text fragments in news articles.
no code implementations • ACL 2020 • Fei Yuan, Linjun Shou, Xuanyu Bai, Ming Gong, Yaobo Liang, Nan Duan, Yan Fu, Daxin Jiang
Multilingual pre-trained models could leverage the training data from a rich source language (such as English) to improve performance on low resource languages.
no code implementations • ACL 2020 • Wanjun Zhong, Duyu Tang, Zhangyin Feng, Nan Duan, Ming Zhou, Ming Gong, Linjun Shou, Daxin Jiang, Jiahai Wang, Jian Yin
The graph is used to obtain graph-enhanced contextual representations of words in Transformer-based architecture.
no code implementations • 12 Apr 2020 • Shangwen Lv, Yuechen Wang, Daya Guo, Duyu Tang, Nan Duan, Fuqing Zhu, Ming Gong, Linjun Shou, Ryan Ma, Daxin Jiang, Guihong Cao, Ming Zhou, Songlin Hu
In this work, we introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.
1 code implementation • EMNLP 2020 • Dayiheng Liu, Yeyun Gong, Jie Fu, Wei Liu, Yu Yan, Bo Shao, Daxin Jiang, Jiancheng Lv, Nan Duan
Furthermore, we propose a simple and effective method to mine the keyphrases of interest in the news article and build the first large-scale keyphrase-aware news headline corpus, which contains over 180K aligned triples of <news article, headline, keyphrase>.
no code implementations • 7 Apr 2020 • Daya Guo, Akari Asai, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Jian Yin, Ming Zhou
In this work, we use multiple knowledge sources as fuels for the model.
2 code implementations • 3 Apr 2020 • Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, Ming Zhou
In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks.
no code implementations • 28 Feb 2020 • Yuyu Zhang, Ping Nie, Xiubo Geng, Arun Ramamurthy, Le Song, Daxin Jiang
Recent studies on open-domain question answering have achieved prominent performance improvement using pre-trained language models such as BERT.
8 code implementations • Findings of the Association for Computational Linguistics 2020 • Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
Ranked #1 on Code Documentation Generation on CodeSearchNet - Go
2 code implementations • Findings (ACL) 2021 • Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou
We study the problem of injecting knowledge into large pre-trained models like BERT and RoBERTa.
Ranked #1 on Entity Typing on Open Entity
no code implementations • 18 Oct 2019 • Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang
The experiment results show that our method can significantly outperform the baseline methods and even achieve comparable results with the original teacher models, along with substantial speedup of model inference.
1 code implementation • IJCNLP 2019 • Tao Shen, Xiubo Geng, Tao Qin, Daya Guo, Duyu Tang, Nan Duan, Guodong Long, Daxin Jiang
We consider the problem of conversational question answering over a large-scale knowledge base.
no code implementations • 12 Sep 2019 • Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang
Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data.
1 code implementation • 9 Sep 2019 • Shangwen Lv, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Songlin Hu
In this work, we propose to automatically extract evidence from heterogeneous knowledge sources, and answer questions based on the extracted evidence.
Ranked #14 on Common Sense Reasoning on CommonsenseQA
no code implementations • 6 Sep 2019 • Tao Shen, Xiubo Geng, Tao Qin, Guodong Long, Jing Jiang, Daxin Jiang
These two problems lead to a poorly-trained semantic parsing model.
no code implementations • IJCNLP 2019 • Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Ming Zhou
On XNLI, an average accuracy improvement of 1.8% across 15 languages is obtained.
Cross-Lingual Natural Language Inference • Cross-Lingual Question Answering +1
no code implementations • 16 Aug 2019 • Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, Ming Zhou
We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner.
Ranked #5 on Image-to-Text Retrieval on MS COCO (Recall@10 metric)
no code implementations • ACL 2019 • Changzhi Sun, Yeyun Gong, Yuanbin Wu, Ming Gong, Daxin Jiang, Man Lan, Shiliang Sun, Nan Duan
We develop a new paradigm for the task of joint entity relation extraction.
Ranked #1 on Relation Extraction on ACE 2005 (Sentence Encoder metric)
2 code implementations • IJCNLP 2019 • Ming Gong, Linjun Shou, Wutao Lin, Zhijie Sang, Quanjia Yan, Ze Yang, Feixiang Cheng, Daxin Jiang
Deep Neural Networks (DNN) have been widely employed in industry to address various Natural Language Processing (NLP) tasks.
no code implementations • 21 Apr 2019 • Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang
Deep pre-training and fine-tuning models (like BERT and OpenAI GPT) have demonstrated excellent results in question answering.
no code implementations • 23 Jan 2018 • Zhao Yan, Duyu Tang, Nan Duan, Shujie Liu, Wendi Wang, Daxin Jiang, Ming Zhou, Zhoujun Li
We present assertion based question answering (ABQA), an open domain question answering task that takes a question and a passage as inputs, and outputs a semi-structured assertion consisting of a subject, a predicate and a list of arguments.