1 code implementation • COLING 2022 • Weixiang Zhao, Yanyan Zhao, Bing Qin
Specifically, two detachment strategies are devised to perform context- and speaker-specific modeling within detached threads, which are bridged through a mutual module.
1 code implementation • COLING 2022 • Xiao Ding, Bowen Chen, Li Du, Bing Qin, Ting Liu
To fill the gap, we propose CogBERT, a framework that can induce fine-grained cognitive features from cognitive data and incorporate cognitive features into BERT by adaptively adjusting the weight of cognitive features for different NLP tasks.
no code implementations • COLING 2022 • Jianhua Yuan, Yanyan Zhao, Yanyue Lu, Bing Qin
Motivated by how humans tackle stance detection tasks, we propose to incorporate the stance reasoning process as task knowledge to assist in learning genuine features and reducing reliance on bias features.
no code implementations • Findings (EMNLP) 2021 • Xin Lu, Yijian Tian, Yanyan Zhao, Bing Qin
To address this problem, we propose a simple and effective Retrieve-Discriminate-Rewrite framework.
1 code implementation • dialdoc (ACL) 2022 • Xiachong Feng, Xiaocheng Feng, Bing Qin
Dialogue summarization, which helps users capture salient information from various types of dialogues, has received much attention recently.
no code implementations • CCL 2022 • Bin Liang, Zijie Lin, Bing Qin, Ruifeng Xu
Existing research on textual sarcasm detection usually stops at sentence-level classification of sarcastic expressions and neglects the influence of the sarcasm target on the expression. To address this problem, this paper proposes a new topic-oriented sarcasm detection task. By introducing topics as the sarcasm targets, the task helps to better understand and model sarcastic expressions. Accordingly, we construct a new topic-oriented sarcasm detection dataset containing 707 topics and 4,871 corresponding topic-comment pairs. On this basis, building on prompt learning and large-scale pre-trained language models, we propose a topic-oriented prompt-learning model for sarcastic expressions. Experimental results on the constructed dataset show that the proposed model outperforms the baseline models, and further analysis shows that the proposed topic-oriented sarcasm detection task is more challenging than traditional sentence-level sarcasm detection.
no code implementations • Findings (ACL) 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, Bing Qin
Weighted decoding methods composed of the pretrained language model (LM) and the controller have achieved promising results for controllable text generation.
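To make the weighted-decoding recipe concrete, here is a minimal sketch (the fusion rule and the control weight are illustrative assumptions, not the paper's exact formulation): a frozen LM's next-token logits are combined with a lightweight controller's logits before sampling.

```python
import torch

def weighted_decode_step(lm_logits, ctrl_logits, weight=2.0):
    """Fuse a frozen LM's next-token logits with an attribute
    controller's logits; `weight` scales the control strength."""
    # log p(x) proportional to log p_LM(x) + weight * log p_ctrl(x)
    fused = lm_logits + weight * ctrl_logits
    return torch.softmax(fused, dim=-1)

# toy usage over a 5-token vocabulary
lm_logits = torch.randn(5)
ctrl_logits = torch.randn(5)
probs = weighted_decode_step(lm_logits, ctrl_logits)
next_token = torch.multinomial(probs, num_samples=1)
```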
1 code implementation • EMNLP 2021 • Jihao Shi, Xiao Ding, Li Du, Ting Liu, Bing Qin
Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses.
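As an illustration of this casting (the NLI checkpoint and input format below are our assumptions, not the paper's setup), each question-candidate concatenation is scored as a hypothesis against the supporting passage:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"  # illustrative off-the-shelf NLI checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "The Nile flows through northeastern Africa."
question = "Which river flows through northeastern Africa?"

for answer in ["the Nile", "the Amazon"]:
    hypothesis = f"{question} {answer}"  # question + candidate answer
    enc = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = model(**enc).logits.softmax(-1).squeeze()
    # roberta-large-mnli label order: contradiction, neutral, entailment
    print(answer, f"entailment={probs[2].item():.3f}")
```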
1 code implementation • EMNLP 2021 • Xinwei Geng, Xiaocheng Feng, Bing Qin
To keep the data distribution consistent with iterative decoding, an iterative training strategy is employed to further improve the rewriting capacity.
1 code implementation • Findings (EMNLP) 2021 • Haichao Zhu, Zekun Wang, Heng Zhang, Ming Liu, Sendong Zhao, Bing Qin
Then, we only fine-tune the lottery subnetwork, a small fraction of the whole parameters, on the annotated target domain data for adaptation.
no code implementations • 11 Sep 2023 • Yuhan Chen, Nuwa Xi, Yanrui Du, Haochun Wang, Chen Jianyu, Sendong Zhao, Bing Qin
Molecule discovery serves as a cornerstone in numerous scientific domains, fueling the development of new materials and innovative drug designs.
no code implementations • 8 Sep 2023 • Yanrui Du, Sendong Zhao, Yuhan Chen, Rai Bai, Jing Liu, Hua Wu, Haifeng Wang, Bing Qin
To address this issue, it is crucial to analyze and mitigate the influence of superficial clues on STM models.
1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu
To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.
no code implementations • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, Ting Liu
Experimental results indicate that even without tuning any parameters, our LLE-INC is on par with automated verbalizers with parameter tuning.
1 code implementation • 8 Sep 2023 • Yanrui Du, Sendong Zhao, MuZhen Cai, Jianyu Chen, Haochun Wang, Yuhan Chen, Haoqiang Guo, Bing Qin
Recent studies have focused on constructing Instruction Fine-Tuning (IFT) data through medical knowledge graphs to enrich the interactive medical knowledge of LLMs.
no code implementations • 7 Aug 2023 • Xiachong Feng, Xiaocheng Feng, Xiyuan Du, Min-Yen Kan, Bing Qin
However, existing work has focused on training models on centralized data, neglecting real-world scenarios where meeting data are infeasible to collect centrally, due to their sensitive nature.
no code implementations • 6 Jul 2023 • Nuwa Xi, Sendong Zhao, Haochun Wang, Chi Liu, Bing Qin, Ting Liu
In this paper, we propose fMRI2text, the first open-vocabulary task aiming to bridge fMRI time series and human language.
no code implementations • 29 Jun 2023 • Tao He, Ming Liu, Yixin Cao, Zekun Wang, Zihao Zheng, Zheng Chu, Bing Qin
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
no code implementations • 28 Jun 2023 • Zhangyin Feng, Yong Dai, Fan Zhang, Duyu Tang, Xiaocheng Feng, Shuangzhi Wu, Bing Qin, Yunbo Cao, Shuming Shi
Traditional multitask learning methods can only exploit common knowledge task-wise or language-wise, losing either cross-language or cross-task knowledge.
1 code implementation • 27 May 2023 • Fangqi Zhu, Lin Zhang, Jun Gao, Bing Qin, Ruifeng Xu, Haiqin Yang
Event skeleton generation, aiming to induce an event schema skeleton graph with abstracted event nodes and their temporal relations from a set of event instance graphs, is a critical step in the temporal complex event schema induction task.
no code implementations • 26 May 2023 • Zhangyin Feng, Yuchen Ren, Xinmiao Yu, Xiaocheng Feng, Duyu Tang, Shuming Shi, Bing Qin
Diffusion models developed on top of powerful text-to-image generation models like Stable Diffusion achieve remarkable success in visual story generation.
1 code implementation • 25 May 2023 • Yichong Huang, Xiaocheng Feng, Xinwei Geng, Baohang Li, Bing Qin
Multilingual neural machine translation has witnessed remarkable progress in recent years.
no code implementations • 24 May 2023 • Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Ming Liu, Bing Qin
Experimental results demonstrate that SmartTrim significantly reduces the computational overhead of various VLMs (by 2-3x) with comparable performance (only a 1-2% degradation) on various vision-language tasks.
no code implementations • 23 May 2023 • Hao Yang, Can Gao, Hao Líu, Xinyan Xiao, Yanyan Zhao, Bing Qin
The experimental results show that our model achieves state-of-the-art performance on various downstream tasks, and ablation studies demonstrate that effective cross-layer learning improves the model's multimodal representation ability.
no code implementations • 19 May 2023 • Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin
Large Language Models (LLMs) have demonstrated human-like intelligence and are widely used in various applications.
1 code implementation • 18 May 2023 • Tingting Wu, Xiao Ding, Minji Tang, Hao Zhang, Bing Qin, Ting Liu
To mitigate the effects of label noise, learning with noisy labels (LNL) methods are designed to achieve better generalization performance.
1 code implementation • 12 May 2023 • Jinglong Gao, Xiao Ding, Bing Qin, Ting Liu
Causal reasoning ability is crucial for numerous NLP applications.
no code implementations • 8 May 2023 • Yang Wu, Yanyan Zhao, Zhongyang Li, Bing Qin, Kai Xiong
Instruction tuning has been shown to be able to improve cross-task generalization of language models.
1 code implementation • 5 May 2023 • Weixiang Zhao, Yanyan Zhao, Shilong Wang, Bing Qin
Specifically, we construct the state transition graph in a two-step manner, named transit-then-interact, to grasp these three types of turn-level transition information.
no code implementations • 2 May 2023 • Xiachong Feng, Xiaocheng Feng, Bing Qin
Generative agents that simulate human society show tremendous potential for further research and practical applications.
no code implementations • 19 Apr 2023 • Weixiang Zhao, Yanyan Zhao, Xin Lu, Shilong Wang, Yanpeng Tong, Bing Qin
This report presents a study on the emotional dialogue capability of ChatGPT, an advanced language model developed by OpenAI.
1 code implementation • 14 Apr 2023 • Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu
Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks.
no code implementations • 12 Apr 2023 • Chi Liu, Haochun Wang, Nuwa Xi, Sendong Zhao, Bing Qin
As a novel approach to tuning pre-trained models, prompt tuning involves freezing the parameters in downstream tasks while inserting trainable embeddings into inputs in the first layer.
no code implementations • 7 Apr 2023 • Kun Zhu, Xiaocheng Feng, Xiachong Feng, Yingsheng Wu, Bing Qin
To alleviate this problem, we present an atomic and challenging task named Hierarchical Catalogue Generation for Literature Review (HiCatGLR), which aims to generate a hierarchical catalogue for a review paper given various references.
no code implementations • 20 Feb 2023 • Weihong Zhong, Mao Zheng, Duyu Tang, Xuan Luo, Heng Gong, Xiaocheng Feng, Bing Qin
Although large-scale video-language pre-training models, which usually build a global alignment between the video and the text, have achieved remarkable progress on various downstream tasks, the idea of adopting fine-grained information during the pre-training stage is not well explored.
no code implementations • 23 Jan 2023 • Xiachong Feng, Xiaocheng Feng, Bing Qin
To mitigate this challenge, we devise a Curriculum Semantic-aware Contrastive Learning strategy (C-SCL), which effectively re-calibrates the subject-dependent EEG representation to the semantic-dependent EEG representation, thus reducing the discrepancy.
no code implementations • 20 Dec 2022 • Jianhua Yuan, Yanyan Zhao, Bing Qin
Stance detection models may tend to rely on dataset bias in the text part as a shortcut and thus fail to sufficiently learn the interaction between the targets and texts.
1 code implementation • 16 Dec 2022 • Kai Xiong, Xiao Ding, Zhongyang Li, Li Du, Bing Qin, Yi Zheng, Baoxing Huai
Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs.
1 code implementation • 16 Dec 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Weihong Zhong, Bing Qin
Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples.
1 code implementation • 6 Dec 2022 • Weixiang Zhao, Yanyan Zhao, Zhuojun Li, Bing Qin
Moreover, social-interaction CSK serves as an emotion-level bridge (E-bridge) and an action-level bridge (A-bridge) to connect candidate utterances with the target one, providing explicit causal clues for the Emotional Interaction and Actional Interaction modules to reason about the target emotion.
Ranked #4 on Causal Emotion Entailment on RECCON
no code implementations • 7 Nov 2022 • Ming Liu, Yaojia LV, Jingrun Zhang, Ruiji Fu, Bing Qin
One is that it supports querying any Chinese named entity and browsing the extracted hypernym-hyponym paths surrounding the query entity.
1 code implementation • 28 Oct 2022 • Haojie Pan, Zepeng Zhai, Yuzhou Zhang, Ruiji Fu, Ming Liu, Yangqiu Song, Zhongyuan Wang, Bing Qin
In this paper, we propose Kuaipedia, a large-scale multi-modal encyclopedia consisting of items, aspects, and short videos linked to them, which was extracted from billions of videos on Kuaishou (Kwai), a well-known short-video platform in China.
1 code implementation • 8 Oct 2022 • Weixiang Zhao, Yanyan Zhao, Xin Lu, Bing Qin
As a critical step toward achieving human-like chatbots, empathetic response generation has attracted increasing interest.
1 code implementation • 6 Oct 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Bing Qin
Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control.
no code implementations • 20 Sep 2022 • Yang Wu, Pai Peng, Zhenyu Zhang, Yanyan Zhao, Bing Qin
At the low-level, we propose the progressive tri-modal attention, which can model the tri-modal feature interactions by adopting a two-pass strategy and can further leverage such interactions to significantly reduce the computation and memory complexity through reducing the input token length.
1 code implementation • COLING 2022 • Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu
Prompt-based fine-tuning for pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain.
no code implementations • 6 Sep 2022 • Pengfei Deng, Jianhua Yuan, Yanyan Zhao, Bing Qin
Our key intuition is that the sentiment representation of a document is composed of the sentiment representations of all the aspects of that document.
no code implementations • 21 Aug 2022 • Tingting Wu, Xiao Ding, Hao Zhang, Jinglong Gao, Li Du, Bing Qin, Ting Liu
To alleviate this issue, curriculum learning is proposed to improve model performance and generalization by ordering training samples in a meaningful (e.g., easy-to-hard) sequence.
1 code implementation • 26 Jul 2022 • Zhenran Xu, Zifei Shan, Yuxin Li, Baotian Hu, Bing Qin
We then establish a strong baseline that scores an R@1 of 46.2% on Few-Shot and 76.6% on Zero-Shot on our dataset.
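For readers unfamiliar with the metric, R@1 (recall at 1) is simply the fraction of queries whose top-ranked candidate is the gold entity; a minimal computation:

```python
def recall_at_1(ranked_candidates, gold):
    """Fraction of queries whose top-ranked candidate matches the gold answer."""
    hits = sum(1 for cands, g in zip(ranked_candidates, gold) if cands[0] == g)
    return hits / len(gold)

# toy example: 2 of 3 queries ranked the gold entity first -> 0.667
print(recall_at_1([["A", "B"], ["C", "A"], ["B", "C"]], ["A", "A", "B"]))
```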
no code implementations • 4 Jul 2022 • Tao He, Ming Liu, Yixin Cao, Tianwen Jiang, Zihao Zheng, Jingrun Zhang, Sendong Zhao, Bing Qin
In this paper, we address sparse KGC from these two motivations simultaneously, further handle their respective drawbacks, and propose a plug-and-play unified framework VEM$^2$L over sparse KGs.
no code implementations • 28 Jun 2022 • Hao Yang, Yanyan Zhao, Jianwei Liu, Yang Wu, Bing Qin
In this paper, we propose a new dataset, the Multimodal Aspect-Category Sentiment Analysis (MACSA) dataset, which contains more than 21K text-image pairs.
1 code implementation • 25 May 2022 • Yanrui Du, Jing Yan, Yan Chen, Jing Liu, Sendong Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Bing Qin
In this study, we focus on the spurious correlation between word features and labels that models learn from the biased data distribution of training data.
no code implementations • Findings (ACL) 2022 • Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin
To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.
1 code implementation • ACL 2022 • Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin
Understanding causality has vital importance for various Natural Language Processing (NLP) applications.
1 code implementation • 3 May 2022 • Yichong Huang, Xiaocheng Feng, Xinwei Geng, Bing Qin
In this paper, we propose a novel training strategy named LSSD (Language-Specific Self-Distillation), which can alleviate the convergence inconsistency and help MNMT models achieve the best performance on each language pair simultaneously.
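A hedged sketch of the self-distillation ingredient (our simplification; LSSD's exact loss and teacher selection differ): the student's output distribution for a language pair is pulled toward that of a language-specific teacher checkpoint while retaining the translation cross-entropy.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, ce_loss,
                           alpha=0.5, T=1.0):
    """Mix cross-entropy with KL to a language-specific teacher
    (a simplified sketch, not the paper's exact formulation)."""
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * ce_loss + alpha * kl

# toy usage: (batch, vocab) logits and a placeholder translation loss
s, t = torch.randn(8, 100), torch.randn(8, 100)
loss = self_distillation_loss(s, t, ce_loss=torch.tensor(2.3))
```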
1 code implementation • Findings (ACL) 2022 • Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao
Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are recognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment models.
no code implementations • 24 Feb 2022 • Zhangyin Feng, Duyu Tang, Cong Zhou, Junwei Liao, Shuangzhi Wu, Xiaocheng Feng, Bing Qin, Yunbo Cao, Shuming Shi
(2) how to predict a word via cloze test without knowing the number of wordpieces in advance?
2 code implementations • 16 Dec 2021 • Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, Furu Wei
We propose a cross-modal attention distillation framework to train a dual-encoder model for vision-language understanding tasks, such as visual reasoning and visual question answering.
no code implementations • 20 Aug 2021 • Yibo Sun, Jizhou Huang, Chunyuan Yuan, Miao Fan, Haifeng Wang, Ming Liu, Bing Qin
We approach this task as a sequence tagging problem, where the goal is to produce <POI name, accessibility label> pairs from unstructured text.
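To make the tagging formulation concrete, a small decoder is sketched below; the BIO tag scheme and the pairing heuristic are hypothetical illustrations, not the paper's exact scheme.

```python
def extract_pairs(tokens, tags):
    """Recover (POI name, accessibility label) pairs from BIO tags.
    Illustrative scheme: B-POI/I-POI mark names, B-ACC marks labels."""
    pois, accs, span = [], [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B-POI":
            span = [tok]
        elif tag == "I-POI" and span:
            span.append(tok)
        else:
            if span:                      # close the current POI span
                pois.append(" ".join(span)); span = []
            if tag == "B-ACC":
                accs.append(tok)
    if span:
        pois.append(" ".join(span))
    # naive pairing: each POI with the nearest accessibility label
    return list(zip(pois, accs))

tokens = ["Joe's", "Cafe", "is", "wheelchair-accessible"]
tags   = ["B-POI", "I-POI", "O", "B-ACC"]
print(extract_pairs(tokens, tags))  # [("Joe's Cafe", 'wheelchair-accessible')]
```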
1 code implementation • ACL 2021 • Li Du, Xiao Ding, Ting Liu, Bing Qin
Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering.
1 code implementation • ACL 2021 • Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin
ExCAR first acquires additional evidence information from a large-scale causal event graph as logical rules for causal reasoning.
no code implementations • 21 Jul 2021 • Zhongyang Li, Xiao Ding, Kuo Liao, Bing Qin, Ting Liu
Recent work has shown success in incorporating pre-trained models like BERT to improve NLP systems.
no code implementations • 7 Jul 2021 • Xiachong Feng, Xiaocheng Feng, Bing Qin
We hope that this first survey of dialogue summarization can provide the community with quick access to and a general picture of this task, and motivate future research.
1 code implementation • ACL 2021 • Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.
1 code implementation • 30 Apr 2021 • Yichong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin
Recently, various neural encoder-decoder models pioneered by Seq2Seq framework have been proposed to achieve the goal of generating more abstractive summaries by learning to map input text to output text.
no code implementations • 26 Apr 2021 • Jiaqi Li, Ming Liu, Zihao Zheng, Heng Zhang, Bing Qin, Min-Yen Kan, Ting Liu
Multiparty Dialogue Machine Reading Comprehension (MRC) differs from traditional MRC as models must handle the complex dialogue discourse structure, previously unconsidered in traditional MRC.
Ranked #4 on Question Answering on Molweni
no code implementations • 17 Apr 2021 • Jianhua Yuan, Yanyan Zhao, Bing Qin, Ting Liu
To this end, we propose the BertMasker network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.
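An illustrative sketch of the masking step (BertMasker learns which words to mask; the fixed lexicon here is a hypothetical stand-in for the learned masker):

```python
DOMAIN_WORDS = {"camera", "battery", "plot", "actor"}  # hypothetical lexicon

def mask_domain_words(tokens, mask_token="[MASK]"):
    """Replace domain-related words so the encoder sees domain-agnostic
    text (fixed-list stand-in for the learned masking network)."""
    return [mask_token if t.lower() in DOMAIN_WORDS else t for t in tokens]

print(mask_domain_words("The battery life is great".split()))
# ['The', '[MASK]', 'life', 'is', 'great']
```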
1 code implementation • 7 Dec 2020 • Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng
First, we present a Dialogue Discourse-Aware Meeting Summarizer (DDAMS) to explicitly model the interaction between utterances in a meeting by modeling different discourse relations.
no code implementations • 2 Dec 2020 • Sendong Zhao, Bing Qin, Ting Liu, Fei Wang
This paper proposes a method BioGRER to improve the BioKG's quality, which comprehensively combines the knowledge graph embedding and logic rules that support and negate triplets in the BioKG.
no code implementations • SEMEVAL 2020 • Xiao Ding, Dingkui Hao, Yuewei Zhang, Kuo Liao, Zhongyang Li, Bing Qin, Ting Liu
In this task, we focus on detecting causation, especially counterfactuals, from texts.
1 code implementation • COLING 2020 • Heng Gong, Yawei Sun, Xiaocheng Feng, Bing Qin, Wei Bi, Xiaojiang Liu, Ting Liu
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from an insufficient-learning problem with limited training data.
no code implementations • COLING 2020 • Xin Lu, Yanyan Zhao, Yang Wu, Yijian Tian, Huipeng Chen, Bing Qin
We noticed that the gold emotion labels of the context utterances can provide explicit and accurate emotion interaction, but it is impossible to input gold labels at inference time.
Ranked #32 on Emotion Recognition in Conversation on IEMOCAP
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Heng Gong, Wei Bi, Xiaocheng Feng, Bing Qin, Xiaojiang Liu, Ting Liu
Neural table-to-text models, which select and order salient data and verbalize them fluently via surface realization, have achieved promising progress.
1 code implementation • CCL 2021 • Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu
In detail, we consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.
no code implementations • 17 Jun 2020 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts.
1 code implementation • ACL 2020 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.
6 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models.
1 code implementation • COLING 2020 • Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, Bing Qin
Research into the area of multiparty dialog has grown considerably over recent years.
Ranked #7 on Discourse Parsing on Molweni
1 code implementation • 24 Feb 2020 • Xiaocheng Feng, Yawei Sun, Bing Qin, Heng Gong, Yibo Sun, Wei Bi, Xiaojiang Liu, Ting Liu
In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content.
8 code implementations • Findings of the Association for Computational Linguistics 2020 • Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
Ranked #1 on Code Documentation Generation on CodeSearchNet - Go
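A minimal usage sketch for natural language code search with the released microsoft/codebert-base checkpoint (the [CLS]-embedding cosine-similarity recipe is a common convention, not necessarily the paper's evaluation setup):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(text):
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        # first-token ([CLS]) hidden state as a sequence embedding
        return model(**enc).last_hidden_state[:, 0]

query = "reverse a list in place"
snippets = ["def f(xs): xs.reverse()", "def g(x): return x + 1"]
q = embed(query)
scores = [torch.cosine_similarity(q, embed(s)).item() for s in snippets]
print(max(zip(scores, snippets)))  # higher score -> better match
```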
no code implementations • 8 Nov 2019 • Jiaqi Li, Ming Liu, Bing Qin, Zihao Zheng, Ting Liu
In this paper, we propose the scheme for annotating large-scale multi-party chat dialogues for discourse parsing and machine comprehension.
no code implementations • 8 Nov 2019 • Haichao Zhu, Li Dong, Furu Wei, Bing Qin, Ting Liu
The limited size of existing query-focused summarization datasets renders training data-driven summarization models challenging.
no code implementations • IJCNLP 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh Chawla, Meng Jiang
In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences.
no code implementations • IJCNLP 2019 • Shuang Chen, Jinpeng Wang, Xiaocheng Feng, Feng Jiang, Bing Qin, Chin-Yew Lin
Recent neural models for data-to-text generation rely on massive parallel pairs of data and text to learn the writing knowledge.
no code implementations • 12 Sep 2019 • Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang
Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data.
1 code implementation • IJCNLP 2019 • Heng Gong, Xiaocheng Feng, Bing Qin, Ting Liu
To address the aforementioned problems, we not only model each table cell by considering other records in the same row, but also enrich the table's representation by modeling each cell in the context of other cells in the same column or of historical (time-dimension) data.
1 code implementation • IJCNLP 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task for the languages other than English.
no code implementations • 26 Jun 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Conditions are essential in the statements of biological literature.
2 code implementations • 19 Jun 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang
To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.
no code implementations • ACL 2019 • Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu
We also present a way to construct training data for our question generation models by leveraging the existing reading comprehension dataset.
no code implementations • 8 Mar 2019 • Tianwen Jiang, Sendong Zhao, Jing Liu, Jin-Ge Yao, Ming Liu, Bing Qin, Ting Liu, Chin-Yew Lin
Time-DS is composed of a time series instance-popularity and two strategies.
no code implementations • 8 Mar 2019 • Tianwen Jiang, Ming Liu, Bing Qin, Ting Liu
This paper investigates an attention-based automatic paradigm called TransATT for attribute acquisition, by learning the representation of hierarchical classes and attributes in Chinese ontology.
no code implementations • 26 Dec 2018 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.
1 code implementation • EMNLP 2018 • Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, Ting Liu
In this paper, we propose a new rich resource enhanced AMR aligner which produces multiple alignments and a new transition system for AMR parsing along with its oracle parser.
Ranked #2 on AMR Parsing on LDC2014T12
no code implementations • EMNLP 2018 • Xinwei Geng, Xiaocheng Feng, Bing Qin, Ting Liu
Although end-to-end neural machine translation (NMT) has achieved remarkable progress in the recent years, the idea of adopting multi-pass decoding mechanism into conventional NMT is not well explored.
no code implementations • 12 Sep 2018 • Yibo Sun, Duyu Tang, Nan Duan, Jingjing Xu, Xiaocheng Feng, Bing Qin
Results show that our knowledge-aware model outperforms the state-of-the-art approaches.
no code implementations • 12 Sep 2018 • Yibo Sun, Daya Guo, Duyu Tang, Nan Duan, Zhao Yan, Xiaocheng Feng, Bing Qin
Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world.
1 code implementation • ACL 2018 • Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, Ting Liu
Many natural language processing tasks can be modeled into structured prediction and solved as a search problem.
1 code implementation • NAACL 2018 • Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, Noah A. Smith
Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD.
Ranked #2 on Dependency Parsing on Tweebank
no code implementations • ACL 2018 • Yibo Sun, Duyu Tang, Nan Duan, Jianshu ji, Guihong Cao, Xiaocheng Feng, Bing Qin, Ting Liu, Ming Zhou
We present a generative model to map natural language questions into SQL queries.
Ranked #4 on Code Generation on WikiSQL
no code implementations • COLING 2016 • Xiaocheng Feng, Duyu Tang, Bing Qin, Ting Liu
Knowledge base (KB) such as Freebase plays an important role for many natural language processing tasks.
no code implementations • 7 Nov 2016 • Luyang Li, Bing Qin, Wenjing Ren, Ting Liu
We use a feedforward memory network and a feedback memory network to learn representations of the credibility of statements about the same object.
no code implementations • SEMEVAL 2016 • Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, Gülşen Eryiğit
8 code implementations • EMNLP 2016 • Duyu Tang, Bing Qin, Ting Liu
Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory.
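A compact sketch of one attention hop over the external memory (dimensions, initialization, and the single hop are illustrative; the model stacks several such layers):

```python
import torch

def attention_hop(memory, aspect_vec, W, b):
    """One neural-attention hop: score each memory slot (context word)
    against the aspect vector, then return the attention-weighted sum.
    memory: (n, d); aspect_vec: (d,); W: (1, 2d); b: (1,)"""
    n = memory.size(0)
    paired = torch.cat([memory, aspect_vec.expand(n, -1)], dim=-1)  # (n, 2d)
    scores = torch.tanh(paired @ W.t() + b).squeeze(-1)             # (n,)
    alpha = torch.softmax(scores, dim=0)
    return alpha @ memory                                           # (d,)

d, n = 4, 6
memory = torch.randn(n, d)   # context word embeddings (external memory)
aspect = torch.randn(d)      # aspect embedding
W, b = torch.randn(1, 2 * d), torch.randn(1)
out = attention_hop(memory, aspect, W, b)  # feeds the next hop/layer
```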
1 code implementation • 19 Apr 2016 • Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, Ting Liu
Many natural language processing (NLP) tasks can be generalized into a segmentation problem.
no code implementations • 18 Dec 2015 • Bing Qin, Duyu Tang, Xinwei Geng, Dandan Ning, Jiahao Liu, Ting Liu
Generating an article automatically with a computer program is a challenging task in artificial intelligence and natural language processing.
10 code implementations • COLING 2016 • Duyu Tang, Bing Qin, Xiaocheng Feng, Ting Liu
Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence.
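A minimal sketch of the target-dependent idea (layer sizes are illustrative): one LSTM reads the left context up to the target, another reads the right context back to the target, and the two final states are concatenated for classification.

```python
import torch
import torch.nn as nn

class TDLSTM(nn.Module):
    """Sketch of TD-LSTM: left-to-target and right-to-target LSTMs."""
    def __init__(self, emb_dim=50, hidden=64, n_classes=3):
        super().__init__()
        self.lstm_l = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.lstm_r = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, left_ctx, right_ctx):
        # left_ctx: words up to and including the target (left to right)
        # right_ctx: words from the sentence end back to the target
        _, (h_l, _) = self.lstm_l(left_ctx)
        _, (h_r, _) = self.lstm_r(right_ctx)
        return self.fc(torch.cat([h_l[-1], h_r[-1]], dim=-1))

model = TDLSTM()
left = torch.randn(1, 5, 50)   # toy embeddings: 5 left-context words
right = torch.randn(1, 4, 50)  # 4 right-context words, reversed
logits = model(left, right)    # (1, 3) sentiment logits
```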
no code implementations • 28 Mar 2014 • Duyu Tang, Bing Qin, Ting Liu, Qiuhui Shi
In order to analyze emotional changes across time and space, this paper presents an Emotion Analysis Platform (EAP), which explores the emotional distribution of each province so that it can monitor the emotional pulse of each province in China.