Search Results for author: Bing Qin

Found 155 papers, 64 papers with code

Pre-Training with Whole Word Masking for Chinese BERT

2 code implementations 19 Jun 2019 Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang

To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.

Document Classification General Classification +5
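
The core idea named in the title, whole word masking, is to mask every sub-token of a selected Chinese word together rather than masking pieces independently. Below is a minimal, hedged sketch of that selection rule in Python; the word segmentation, tokenizer, and masking rate are toy stand-ins, not the authors' released pre-training code.

```python
import random

def whole_word_mask(words, tokenize, mask_rate=0.15, mask_token="[MASK]"):
    """Mask whole words: if a word is selected, every sub-token it produces
    is replaced by the mask token (an illustrative simplification of the
    whole-word-masking strategy, not the released pre-training pipeline)."""
    masked, labels = [], []
    for word in words:
        pieces = tokenize(word)           # word -> sub-token pieces
        if random.random() < mask_rate:   # mask all pieces of this word
            masked.extend([mask_token] * len(pieces))
            labels.extend(pieces)         # original pieces become prediction targets
        else:
            masked.extend(pieces)
            labels.extend([None] * len(pieces))
    return masked, labels

# Toy usage with a naive character-level "tokenizer" for pre-segmented Chinese words.
words = ["使用", "语言", "模型", "来", "预测"]
tokens, targets = whole_word_mask(words, tokenize=list, mask_rate=0.3)
print(tokens, targets)
```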

Revisiting Pre-Trained Models for Chinese Natural Language Processing

6 code implementations Findings of the Association for Computational Linguistics 2020 Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu

Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models.

Language Modelling Stock Market Prediction

HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge

1 code implementation 14 Apr 2023 Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu

Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks.

Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese

1 code implementation 8 Sep 2023 Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu

To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.

Domain Adaptation Hallucination +2
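
As a rough illustration of the retrieve-then-prompt idea described in the snippet, the sketch below pulls triples from a toy knowledge base and inserts them into a prompt before generation. The knowledge base, retrieval scoring, and prompt template are hypothetical placeholders, not the paper's actual pipeline.

```python
# Hypothetical toy knowledge base of (entity, relation, value) triples.
MEDICAL_KB = [
    ("influenza", "symptom", "fever"),
    ("influenza", "symptom", "cough"),
    ("influenza", "treatment", "rest and fluids"),
]

def retrieve_triples(question, kb, top_k=3):
    """Naive keyword-overlap retrieval over structured triples."""
    scored = []
    for triple in kb:
        overlap = sum(term in question.lower() for term in triple)
        scored.append((overlap, triple))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [t for score, t in scored[:top_k] if score > 0]

def build_prompt(question, triples):
    """Insert retrieved knowledge into the prompt that a tuned LLM would see."""
    knowledge = "\n".join(f"- {e} | {r} | {v}" for e, r, v in triples)
    return (
        "Answer using the structured medical knowledge below.\n"
        f"Knowledge:\n{knowledge}\n"
        f"Question: {question}\nAnswer:"
    )

question = "What are the symptoms of influenza?"
print(build_prompt(question, retrieve_triples(question, MEDICAL_KB)))
```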

Don't Ignore Dual Logic Ability of LLMs while Privatizing: A Data-Intensive Analysis in Medical Domain

1 code implementation 8 Sep 2023 Yanrui Du, Sendong Zhao, MuZhen Cai, Ming Ma, Danyang Zhao, Jiawei Cao, Bing Qin

We conduct several experiments to analyze the dual logic ability of LLMs by examining the consistency of the stance in responses to paired questions about the same fact.

Fact Checking Knowledge Graphs

Aspect Level Sentiment Classification with Deep Memory Network

8 code implementations EMNLP 2016 Duyu Tang, Bing Qin, Ting Liu

Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory.

Aspect-Based Sentiment Analysis (ABSA) General Classification +1
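
The snippet describes stacking attention layers ("hops") over an external memory of context words. A minimal NumPy sketch of such multi-hop attention follows; the random vectors and residual query update are illustrative simplifications, not the published model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_attention(memory, aspect_vec, hops=3):
    """memory: (n_words, d) context embeddings; aspect_vec: (d,).
    Each hop attends over the memory conditioned on the current query and
    updates the query, in the spirit of a memory-network ABSA model."""
    query = aspect_vec
    for _ in range(hops):
        scores = memory @ query       # (n_words,) attention logits
        weights = softmax(scores)
        attended = weights @ memory   # weighted sum of memory slots
        query = query + attended      # residual update of the query
    return query

rng = np.random.default_rng(0)
memory = rng.normal(size=(6, 8))      # 6 context words, dimension 8
aspect = rng.normal(size=8)
print(multi_hop_attention(memory, aspect).shape)   # (8,)
```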

Retrieval-Generation Synergy Augmented Large Language Models

1 code implementation 8 Oct 2023 Zhangyin Feng, Xiaocheng Feng, Dezhi Zhao, Maojin Yang, Bing Qin

Large language models augmented with task-relevant documents have demonstrated impressive performance on knowledge-intensive tasks.

Question Answering Retrieval
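
The title points to an iterative loop in which retrieval and generation reinforce each other. The sketch below shows one generic form of such a loop with hypothetical `retrieve` and `generate` callables; it is not the paper's implementation.

```python
def iterative_retrieval_generation(question, retrieve, generate, iterations=2):
    """Alternate retrieval and generation: each draft answer is appended to
    the query used for the next retrieval round (a common retrieval-generation
    synergy pattern; the details here are illustrative only)."""
    query, answer = question, ""
    for _ in range(iterations):
        documents = retrieve(query)              # fetch task-relevant documents
        answer = generate(question, documents)   # draft an answer from them
        query = f"{question} {answer}"           # enrich the next retrieval query
    return answer

# Toy stand-ins so the loop runs end to end.
corpus = ["Paris is the capital of France.", "France is in Europe."]
retrieve = lambda q: [d for d in corpus if any(w in d.lower() for w in q.lower().split())]
generate = lambda q, docs: docs[0] if docs else "I don't know."
print(iterative_retrieval_generation("capital of France?", retrieve, generate))
```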

KwaiAgents: Generalized Information-seeking Agent System with Large Language Models

1 code implementation 8 Dec 2023 Haojie Pan, Zepeng Zhai, Hao Yuan, Yaojia LV, Ruiji Fu, Ming Liu, Zhongyuan Wang, Bing Qin

Driven by curiosity, humans have continually sought to explore and understand the world around them, leading to the invention of various tools to satiate this inquisitiveness.

A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future

1 code implementation 27 Sep 2023 Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, Ting Liu

Chain-of-thought reasoning, a cognitive process fundamental to human intelligence, has garnered significant attention in the realm of artificial intelligence and natural language processing.

A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions

1 code implementation 9 Nov 2023 Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation.

Hallucination

Less Learn Shortcut: Analyzing and Mitigating Learning of Spurious Feature-Label Correlation

1 code implementation 25 May 2022 Yanrui Du, Jing Yan, Yan Chen, Jing Liu, Sendong Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Bing Qin

In this study, we focus on the spurious correlation between word features and labels that models learn from the biased data distribution of training data.

Natural Language Inference Sentiment Analysis

The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey

1 code implementation 30 Apr 2021 Yichong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin

Recently, various neural encoder-decoder models pioneered by Seq2Seq framework have been proposed to achieve the goal of generating more abstractive summaries by learning to map input text to output text.

Abstractive Text Summarization

Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization

1 code implementation ACL 2021 Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu

Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.

Conversational Response Generation Language Modelling +1

TableGPT: Few-shot Table-to-Text Generation with Table Structure Reconstruction and Content Matching

1 code implementation COLING 2020 Heng Gong, Yawei Sun, Xiaocheng Feng, Bing Qin, Wei Bi, Xiaojiang Liu, Ting Liu

Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from an insufficient learning problem with limited training data.

Few-Shot Learning Language Modelling +2

Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization

1 code implementation 7 Dec 2020 Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng

First, we present a Dialogue Discourse-Aware Meeting Summarizer (DDAMS) to explicitly model the interaction between utterances in a meeting by modeling different discourse relations.

Data Augmentation Meeting Summarization

Kuaipedia: a Large-scale Multi-modal Short-video Encyclopedia

1 code implementation 28 Oct 2022 Haojie Pan, Zepeng Zhai, Yuzhou Zhang, Ruiji Fu, Ming Liu, Yangqiu Song, Zhongyuan Wang, Bing Qin

In this paper, we propose Kuaipedia, a large-scale multi-modal encyclopedia consisting of items, aspects, and short videos linked to them, which was extracted from billions of videos of Kuaishou (Kwai), a well-known short-video platform in China.

Entity Linking Entity Typing

e-CARE: a New Dataset for Exploring Explainable Causal Reasoning

1 code implementation ACL 2022 Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin

Understanding causality has vital importance for various Natural Language Processing (NLP) applications.


A Distributional Lens for Multi-Aspect Controllable Text Generation

1 code implementation 6 Oct 2022 Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Bing Qin

Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control.

Attribute Text Generation

Controllable Text Generation via Probability Density Estimation in the Latent Space

1 code implementation 16 Dec 2022 Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Weihong Zhong, Bing Qin

Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples.

Attribute Density Estimation +1

Distilled Dual-Encoder Model for Vision-Language Understanding

2 code implementations 16 Dec 2021 Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, Furu Wei

We propose a cross-modal attention distillation framework to train a dual-encoder model for vision-language understanding tasks, such as visual reasoning and visual question answering.

Question Answering Visual Entailment +2

How Does Selective Mechanism Improve Self-Attention Networks?

1 code implementation ACL 2020 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.

Machine Translation Natural Language Inference +2

Incorporating Commonsense Knowledge into Abstractive Dialogue Summarization via Heterogeneous Graph Networks

1 code implementation CCL 2021 Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu

In detail, we consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.

Abstractive Dialogue Summarization dialogue summary +1

Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors

1 code implementation Findings (ACL) 2022 Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao

Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are recognized as other words, which makes the sentiment of the text change and hurts the performance of multimodal sentiment models directly.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +4

An AMR Aligner Tuned by Transition-based Parser

1 code implementation EMNLP 2018 Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, Ting Liu

In this paper, we propose a new rich resource enhanced AMR aligner which produces multiple alignments and a new transition system for AMR parsing along with its oracle parser.

AMR Parsing POS +1

Learning to Rewrite for Non-Autoregressive Neural Machine Translation

1 code implementation EMNLP 2021 Xinwei Geng, Xiaocheng Feng, Bing Qin

Towards keeping the consistency of data distribution with iterative decoding, an iterative training strategy is employed to further improve the capacity of rewriting.

Machine Translation Translation

Table-to-Text Generation with Effective Hierarchical Encoder on Three Dimensions (Row, Column and Time)

1 code implementation IJCNLP 2019 Heng Gong, Xiaocheng Feng, Bing Qin, Ting Liu

To address the aforementioned problems, we not only model each table cell considering other records in the same row, but also enrich the table's representation by modeling each table cell in the context of other cells in the same column or with historical (time dimension) data.

Table-to-Text Generation Time Series +1
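
To make the three context dimensions concrete, the NumPy sketch below enriches one cell with attention over its row, its column, and its own history across tables. The dot-product attention and simple concatenation are illustrative; the paper's gating and scoring are more elaborate.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, context):
    """Dot-product attention of one cell vector over a set of context cells."""
    weights = softmax(context @ query)
    return weights @ context

def encode_cell(tables, t, i, j):
    """tables: (time, rows, cols, dim). Enrich cell (i, j) of table t with
    row, column, and time (same cell in earlier tables) contexts."""
    cell = tables[t, i, j]
    row_ctx = attend(cell, tables[t, i, :])        # other cells in the row
    col_ctx = attend(cell, tables[t, :, j])        # other cells in the column
    time_ctx = attend(cell, tables[:t + 1, i, j])  # the cell's history over time
    return np.concatenate([cell, row_ctx, col_ctx, time_ctx])

rng = np.random.default_rng(0)
tables = rng.normal(size=(4, 3, 5, 8))   # 4 timesteps, 3x5 table, dim 8
print(encode_cell(tables, t=3, i=1, j=2).shape)   # (32,)
```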

TimeBench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models

1 code implementation 29 Nov 2023 Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Haotian Wang, Ming Liu, Bing Qin

Understanding time is a pivotal aspect of human cognition, crucial in the broader framework of grasping the intricacies of the world.

Hansel: A Chinese Few-Shot and Zero-Shot Entity Linking Benchmark

1 code implementation 26 Jul 2022 Zhenran Xu, Zifei Shan, Yuxin Li, Baotian Hu, Bing Qin

We then establish a strong baseline that scores an R@1 of 46.2% on Few-Shot and 76.6% on Zero-Shot on our dataset.

Entity Linking

From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery

1 code implementation 11 Sep 2023 Yuhan Chen, Nuwa Xi, Yanrui Du, Haochun Wang, Jianyu Chen, Sendong Zhao, Bing Qin

Furthermore, our method shows a sustained improvement as the volume of pseudo data increases, revealing the great potential of pseudo data in advancing low-resource cross-modal molecule discovery.

Descriptive Domain Adaptation +2

ReCo: Reliable Causal Chain Reasoning via Structural Causal Recurrent Neural Networks

1 code implementation 16 Dec 2022 Kai Xiong, Xiao Ding, Zhongyang Li, Li Du, Bing Qin, Yi Zheng, Baoxing Huai

Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs.

Decision Making

Learning Event Graph Knowledge for Abductive Reasoning

1 code implementation ACL 2021 Li Du, Xiao Ding, Ting Liu, Bing Qin

Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering.

Question Answering Reading Comprehension

An Early Evaluation of GPT-4V(ision)

1 code implementation 25 Oct 2023 Yang Wu, Shilong Wang, Hao Yang, Tian Zheng, Hongbo Zhang, Yanyan Zhao, Bing Qin

In this paper, we evaluate different abilities of GPT-4V including visual understanding, language understanding, visual puzzle solving, and understanding of other modalities such as depth, thermal, video, and audio.

Math

Unifying the Convergences in Multilingual Neural Machine Translation

1 code implementation 3 May 2022 Yichong Huang, Xiaocheng Feng, Xinwei Geng, Bing Qin

In this paper, we propose a novel training strategy named LSSD (Language-Specific Self-Distillation), which can alleviate the convergence inconsistency and help MNMT models achieve the best performance on each language pair simultaneously.

Machine Translation NMT +1
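
A hedged sketch of the self-distillation idea: the current model's prediction is pulled toward a language-specific teacher distribution (for instance, the best checkpoint for that language pair) in addition to the usual translation loss. The probability tables and loss weighting below are toy illustrations, not the paper's exact objective.

```python
import numpy as np

def cross_entropy(probs, target_idx):
    return -np.log(probs[target_idx] + 1e-9)

def kl_div(p, q):
    return float(np.sum(p * (np.log(p + 1e-9) - np.log(q + 1e-9))))

def self_distillation_loss(student_probs, teacher_probs, target_idx, alpha=0.5):
    """Combine the usual translation loss with a distillation term toward a
    language-specific teacher distribution (illustrative weighting only)."""
    ce = cross_entropy(student_probs, target_idx)
    kd = kl_div(teacher_probs, student_probs)
    return (1 - alpha) * ce + alpha * kd

student = np.array([0.2, 0.5, 0.3])   # current model's next-token probabilities
teacher = np.array([0.1, 0.7, 0.2])   # best checkpoint so far for this language pair
print(self_distillation_loss(student, teacher, target_idx=1))
```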

Knowledge-Bridged Causal Interaction Network for Causal Emotion Entailment

1 code implementation 6 Dec 2022 Weixiang Zhao, Yanyan Zhao, Zhuojun Li, Bing Qin

Moreover, social-interaction CSK serves as emotion-level bridge (E-bridge) and action-level bridge (A-bridge) to connect candidate utterances with the target one, which provides explicit causal clues for the Emotional Interaction module and Actional Interaction module to reason the target emotion.

Causal Emotion Entailment Graph Attention

ExCAR: Event Graph Knowledge Enhanced Explainable Causal Reasoning

1 code implementation ACL 2021 Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin

ExCAR first acquires additional evidence information from a large-scale causal event graph as logical rules for causal reasoning.

Representation Learning

TransESC: Smoothing Emotional Support Conversation via Turn-Level State Transition

1 code implementation 5 May 2023 Weixiang Zhao, Yanyan Zhao, Shilong Wang, Bing Qin

Specifically, we construct the state transition graph in a two-step way, named transit-then-interact, to grasp such three types of turn-level transition information.

MolTailor: Tailoring Chemical Molecular Representation to Specific Tasks via Text Prompts

1 code implementation 21 Jan 2024 Haoqiang Guo, Sendong Zhao, Haochun Wang, Yanrui Du, Bing Qin

The agent accentuates task-relevant features in the molecular representation by understanding the natural language description of the task, just as a tailor customizes clothes for clients.

Drug Discovery Language Modelling +2

Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation

1 code implementation 24 Feb 2020 Xiaocheng Feng, Yawei Sun, Bing Qin, Heng Gong, Yibo Sun, Wei Bi, Xiaojiang Liu, Ting Liu

In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content.

Sentence Style Transfer +2

Prompt Combines Paraphrase: Teaching Pre-trained Models to Understand Rare Biomedical Words

1 code implementation COLING 2022 Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu

Prompt-based fine-tuning for pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain.

Natural Language Inference

MuCDN: Mutual Conversational Detachment Network for Emotion Recognition in Multi-Party Conversations

1 code implementation COLING 2022 Weixiang Zhao, Yanyan Zhao, Bing Qin

Specifically, two detachment ways are devised to perform context and speaker-specific modeling within detached threads and they are bridged through a mutual module.

Emotion Recognition

MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization

1 code implementation dialdoc (ACL) 2022 Xiachong Feng, Xiaocheng Feng, Bing Qin

Dialogue summarization, which helps users capture salient information from various types of dialogues, has received much attention recently.

Benchmarking dialogue summary +1

CogBERT: Cognition-Guided Pre-trained Language Models

1 code implementation COLING 2022 Xiao Ding, Bowen Chen, Li Du, Bing Qin, Ting Liu

To fill the gap, we propose CogBERT, a framework that can induce fine-grained cognitive features from cognitive data and incorporate cognitive features into BERT by adaptively adjusting the weight of cognitive features for different NLP tasks.

EEG

A Diffusion Model for Event Skeleton Generation

1 code implementation 27 May 2023 Fangqi Zhu, Lin Zhang, Jun Gao, Bing Qin, Ruifeng Xu, Haiqin Yang

Event skeleton generation, aiming to induce an event schema skeleton graph with abstracted event nodes and their temporal relations from a set of event instance graphs, is a critical step in the temporal complex event schema induction task.

Denoising Graph Generation

Don't Lose Yourself! Empathetic Response Generation via Explicit Self-Other Awareness

1 code implementation 8 Oct 2022 Weixiang Zhao, Yanyan Zhao, Xin Lu, Bing Qin

As a critical step toward achieving human-like chatbots, empathetic response generation has attracted increasing interest.

Empathetic Response Generation Response Generation

NoisywikiHow: A Benchmark for Learning with Real-world Noisy Labels in Natural Language Processing

1 code implementation 18 May 2023 Tingting Wu, Xiao Ding, Minji Tang, Hao Zhang, Bing Qin, Ting Liu

To mitigate the effects of label noise, learning with noisy labels (LNL) methods are designed to achieve better generalization performance.

Learning with noisy labels

Learning to Describe for Predicting Zero-shot Drug-Drug Interactions

1 code implementation 13 Mar 2024 Fangqi Zhu, Yongqi Zhang, Lei Chen, Bing Qin, Ruifeng Xu

Adverse drug-drug interactions (DDIs) can compromise the effectiveness of concurrent drug administration, posing a significant challenge in healthcare.

Language Modelling Reinforcement Learning (RL)

Less Is More: Domain Adaptation with Lottery Ticket for Reading Comprehension

1 code implementation Findings (EMNLP) 2021 Haichao Zhu, Zekun Wang, Heng Zhang, Ming Liu, Sendong Zhao, Bing Qin

Then, we only fine-tune the lottery subnetwork, a small fraction of the whole parameters, on the annotated target domain data for adaptation.

Domain Adaptation Reading Comprehension

Neural Natural Logic Inference for Interpretable Question Answering

1 code implementation EMNLP 2021 Jihao Shi, Xiao Ding, Li Du, Ting Liu, Bing Qin

Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses.

Multiple-choice Natural Language Inference +1

Manifold-based Verbalizer Space Re-embedding for Tuning-free Prompt-based Classification

1 code implementation 8 Sep 2023 Haochun Wang, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, Ting Liu

Experimental results indicate that even without tuning any parameters, our LLE-INC is on par with automated verbalizers with parameter tuning.

Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate

1 code implementation 19 May 2023 Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin

Through extensive experiments on various datasets, LLMs can effectively collaborate to reach a consensus despite noticeable inter-inconsistencies, but imbalances in their abilities can lead to domination by superior LLMs.

Decision Making

Truth Discovery with Memory Network

no code implementations 7 Nov 2016 Luyang Li, Bing Qin, Wenjing Ren, Ting Liu

We use a feedforward memory network and a feedback memory network to learn the representation of the credibility of statements that are about the same object.

A Planning based Framework for Essay Generation

no code implementations 18 Dec 2015 Bing Qin, Duyu Tang, Xinwei Geng, Dandan Ning, Jiahao Liu, Ting Liu

Generating an article automatically with a computer program is a challenging task in artificial intelligence and natural language processing.

Sentence

Emotion Analysis Platform on Chinese Microblog

no code implementations 28 Mar 2014 Duyu Tang, Bing Qin, Ting Liu, Qiuhui Shi

In order to analyze the emotional changes in accordance with time and space, this paper presents an Emotion Analysis Platform (EAP), which explores the emotional distribution of each province, so that it can monitor the global pulse of each province in China.

Emotion Recognition

Knowledge Based Machine Reading Comprehension

no code implementations 12 Sep 2018 Yibo Sun, Daya Guo, Duyu Tang, Nan Duan, Zhao Yan, Xiaocheng Feng, Bing Qin

Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world.

Machine Reading Comprehension Question Answering +2

Adaptive Multi-pass Decoder for Neural Machine Translation

no code implementations EMNLP 2018 Xinwei Geng, Xiaocheng Feng, Bing Qin, Ting Liu

Although end-to-end neural machine translation (NMT) has achieved remarkable progress in recent years, the idea of adopting a multi-pass decoding mechanism into conventional NMT is not well explored.

Machine Translation NMT +2

Learning to Refine Source Representations for Neural Machine Translation

no code implementations 26 Dec 2018 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.

Machine Translation NMT +2

Attribute Acquisition in Ontology based on Representation Learning of Hierarchical Classes and Attributes

no code implementations 8 Mar 2019 Tianwen Jiang, Ming Liu, Bing Qin, Ting Liu

This paper investigates an attention-based automatic paradigm called TransATT for attribute acquisition, by learning the representation of hierarchical classes and attributes in Chinese ontology.

Attribute Relation +1

Learning to Ask Unanswerable Questions for Machine Reading Comprehension

no code implementations ACL 2019 Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu

We also present a way to construct training data for our question generation models by leveraging the existing reading comprehension dataset.

Data Augmentation Machine Reading Comprehension +2

Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning

no code implementations 12 Sep 2019 Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang

Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data.

Meta-Learning Semantic Parsing +1

Multi-Input Multi-Output Sequence Labeling for Joint Extraction of Fact and Condition Tuples from Scientific Text

no code implementations IJCNLP 2019 Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh Chawla, Meng Jiang

In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences.


Transforming Wikipedia into Augmented Data for Query-Focused Summarization

no code implementations 8 Nov 2019 Haichao Zhu, Li Dong, Furu Wei, Bing Qin, Ting Liu

The limited size of existing query-focused summarization datasets renders training data-driven summarization models challenging.

Data Augmentation Query-focused Summarization

An Annotation Scheme of A Large-scale Multi-party Dialogues Dataset for Discourse Parsing and Machine Comprehension

no code implementations 8 Nov 2019 Jiaqi Li, Ming Liu, Bing Qin, Zihao Zheng, Ting Liu

In this paper, we propose the scheme for annotating large-scale multi-party chat dialogues for discourse parsing and machine comprehension.

Discourse Parsing Machine Reading Comprehension

Biomedical Knowledge Graph Refinement with Embedding and Logic Rules

no code implementations 2 Dec 2020 Sendong Zhao, Bing Qin, Ting Liu, Fei Wang

This paper proposes a method BioGRER to improve the BioKG's quality, which comprehensively combines the knowledge graph embedding and logic rules that support and negate triplets in the BioKG.

Knowledge Graph Embedding Knowledge Graphs

An Iterative Emotion Interaction Network for Emotion Recognition in Conversations

no code implementations COLING 2020 Xin Lu, Yanyan Zhao, Yang Wu, Yijian Tian, Huipeng Chen, Bing Qin

We noticed that the gold emotion labels of the context utterances can provide explicit and accurate emotion interaction, but it is impossible to input gold labels at inference time.

Emotion Recognition in Conversation

Learning to Share by Masking the Non-shared for Multi-domain Sentiment Classification

no code implementations 17 Apr 2021 Jianhua Yuan, Yanyan Zhao, Bing Qin, Ting Liu

To this end, we propose the BertMasker network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.

General Classification Multi-Domain Sentiment Classification +3
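
An illustrative sketch of the masking step described above: domain-indicative words are replaced by a mask token to form a domain-agnostic view, while the removed words can feed a domain-aware representation. The domain word list and example here are hypothetical, not the paper's learned masking.

```python
# Hypothetical list of domain-indicative words for a toy "electronics" domain.
DOMAIN_WORDS = {"battery", "screen", "charger"}

def split_views(tokens, domain_words, mask_token="[MASK]"):
    """Return a domain-agnostic token sequence (domain words masked out) and
    the masked-out words that can feed a domain-aware sentence representation."""
    agnostic, domain_view = [], []
    for tok in tokens:
        if tok.lower() in domain_words:
            agnostic.append(mask_token)
            domain_view.append(tok)
        else:
            agnostic.append(tok)
    return agnostic, domain_view

tokens = "the battery life is great but the screen is dim".split()
print(split_views(tokens, DOMAIN_WORDS))
```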

DADgraph: A Discourse-aware Dialogue Graph Neural Network for Multiparty Dialogue Machine Reading Comprehension

no code implementations 26 Apr 2021 Jiaqi Li, Ming Liu, Zihao Zheng, Heng Zhang, Bing Qin, Min-Yen Kan, Ting Liu

Multiparty Dialogue Machine Reading Comprehension (MRC) differs from traditional MRC as models must handle the complex dialogue discourse structure, previously unconsidered in traditional MRC.

Machine Reading Comprehension Question Answering

A Survey on Dialogue Summarization: Recent Advances and New Frontiers

no code implementations 7 Jul 2021 Xiachong Feng, Xiaocheng Feng, Bing Qin

We hope that this first survey of dialogue summarization can provide the community with quick access to and a general picture of this task and motivate future research.

Text Generation

CausalBERT: Injecting Causal Knowledge Into Pre-trained Models with Minimal Supervision

no code implementations 21 Jul 2021 Zhongyang Li, Xiao Ding, Kuo Liao, Bing Qin, Ting Liu

Recent work has shown success in incorporating pre-trained models like BERT to improve NLP systems.

Causal Inference

GEDIT: Geographic-Enhanced and Dependency-Guided Tagging for Joint POI and Accessibility Extraction at Baidu Maps

no code implementations 20 Aug 2021 Yibo Sun, Jizhou Huang, Chunyuan Yuan, Miao Fan, Haifeng Wang, Ming Liu, Bing Qin

We approach this task as a sequence tagging problem, where the goal is to produce <POI name, accessibility label> pairs from unstructured text.
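
Producing <POI name, accessibility label> pairs by sequence tagging roughly amounts to decoding labeled spans from per-token tags. The sketch below assumes a BIO-style tag scheme with the accessibility label carried in the tag; the scheme, labels, and example are illustrative, not the deployed Baidu Maps system.

```python
def decode_pairs(tokens, tags):
    """Collect <POI name, accessibility label> pairs from BIO-style tags of the
    form B-<LABEL>/I-<LABEL>/O (a common convention; illustrative only)."""
    pairs, span, label = [], [], None
    for tok, tag in list(zip(tokens, tags)) + [("", "O")]:  # sentinel flushes last span
        if tag.startswith("B-"):
            if span:
                pairs.append((" ".join(span), label))
            span, label = [tok], tag[2:]
        elif tag.startswith("I-") and span:
            span.append(tok)
        else:
            if span:
                pairs.append((" ".join(span), label))
            span, label = [], None
    return pairs

tokens = ["Starbucks", "Wudaokou", "is", "permanently", "closed"]
tags   = ["B-CLOSED", "I-CLOSED", "O", "O", "O"]
print(decode_pairs(tokens, tags))   # [('Starbucks Wudaokou', 'CLOSED')]
```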

Improving Controllable Text Generation with Position-Aware Weighted Decoding

no code implementations Findings (ACL) 2022 Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, Bing Qin

Weighted decoding methods composed of the pretrained language model (LM) and the controller have achieved promising results for controllable text generation.

Attribute Language Modelling +2
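
Weighted decoding, as described in the snippet, combines the language model's next-token scores with a controller's attribute scores at each decoding step. The NumPy sketch below uses a fixed weight; the paper's contribution is to vary that weight by position, which is not modeled here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_decode_step(lm_logits, controller_logits, weight=2.0):
    """Bias the LM's next-token distribution with controller scores that favor
    attribute-bearing tokens (a generic weighted-decoding step; illustrative)."""
    return softmax(lm_logits + weight * controller_logits)

vocab = ["the", "movie", "was", "wonderful", "terrible"]
lm_logits = np.array([2.0, 1.5, 1.2, 0.3, 0.4])
controller = np.array([0.0, 0.0, 0.0, 3.0, -3.0])   # push toward positive sentiment
probs = weighted_decode_step(lm_logits, controller)
print(vocab[int(np.argmax(probs))])   # "wonderful"
```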

A Graph Enhanced BERT Model for Event Prediction

no code implementations Findings (ACL) 2022 Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin

To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.

MACSA: A Multimodal Aspect-Category Sentiment Analysis Dataset with Multimodal Fine-grained Aligned Annotations

no code implementations 28 Jun 2022 Hao Yang, Yanyan Zhao, Jianwei Liu, Yang Wu, Bing Qin

In this paper, we propose a new dataset, the Multimodal Aspect-Category Sentiment Analysis (MACSA) dataset, which contains more than 21K text-image pairs.

Aspect Category Sentiment Analysis Sentiment Analysis

VEM$^2$L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion

no code implementations 4 Jul 2022 Tao He, Ming Liu, Yixin Cao, Tianwen Jiang, Zihao Zheng, Jingrun Zhang, Sendong Zhao, Bing Qin

In this paper, we solve the sparse KGC from these two motivations simultaneously and handle their respective drawbacks further, and propose a plug-and-play unified framework VEM$^2$L over sparse KGs.

Knowledge Distillation Missing Elements +1

DiscrimLoss: A Universal Loss for Hard Samples and Incorrect Samples Discrimination

no code implementations 21 Aug 2022 Tingting Wu, Xiao Ding, Hao Zhang, Jinglong Gao, Li Du, Bing Qin, Ting Liu

To relieve this issue, curriculum learning is proposed to improve model performance and generalization by ordering training samples in a meaningful (e.g., easy to hard) sequence.

Image Classification regression
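
The easy-to-hard ordering mentioned in the snippet can be sketched as sorting samples by a per-sample difficulty score, for example an early-epoch loss. The difficulty measure below is a placeholder and is not the DiscrimLoss objective itself.

```python
def curriculum_order(samples, difficulty):
    """Order training samples from easy to hard according to a per-sample
    difficulty score (e.g., an early-epoch loss); a generic curriculum step."""
    return sorted(samples, key=difficulty)

# Toy samples paired with a hypothetical per-sample loss.
samples = [("clean example", 0.1), ("ambiguous example", 0.9), ("typical example", 0.4)]
ordered = curriculum_order(samples, difficulty=lambda s: s[1])
print([text for text, loss in ordered])
```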

Zero-shot Aspect-level Sentiment Classification via Explicit Utilization of Aspect-to-Document Sentiment Composition

no code implementations 6 Sep 2022 Pengfei Deng, Jianhua Yuan, Yanyan Zhao, Bing Qin

Our key intuition is that the sentiment representation of a document is composed of the sentiment representations of all the aspects of that document.

Classification Sentiment Analysis +1

An Efficient End-to-End Transformer with Progressive Tri-modal Attention for Multi-modal Emotion Recognition

no code implementations 20 Sep 2022 Yang Wu, Pai Peng, Zhenyu Zhang, Yanyan Zhao, Bing Qin

At the low-level, we propose the progressive tri-modal attention, which can model the tri-modal feature interactions by adopting a two-pass strategy and can further leverage such interactions to significantly reduce the computation and memory complexity through reducing the input token length.

Emotion Recognition

面向话题的讽刺识别:新任务、新数据和新方法(Topic-Oriented Sarcasm Detection: New Task, New Dataset and New Method)

no code implementations CCL 2022 Bin Liang, Zijie Lin, Bing Qin, Ruifeng Xu

Existing research on textual sarcasm detection usually stops at sentence-level classification of sarcastic expressions and does not consider the influence of the sarcasm target on the sarcastic expression. To address this problem, this paper proposes a new topic-oriented sarcasm detection task. By introducing topics and treating the topic as the sarcasm target, the task helps to better understand and model sarcastic expressions. Accordingly, this paper constructs a new topic-oriented sarcasm detection dataset, which contains 707 topics and 4,871 corresponding topic-comment pairs. On this basis, a topic-oriented sarcasm prompt-learning model is proposed, built on prompt learning and large-scale pre-trained language models. Experimental results on the constructed dataset show that the proposed model outperforms the baseline models, and further analysis shows that the proposed topic-oriented sarcasm detection task is more challenging than the traditional sentence-level sarcasm detection task.

Sarcasm Detection

SSR: Utilizing Simplified Stance Reasoning Process for Robust Stance Detection

no code implementations COLING 2022 Jianhua Yuan, Yanyan Zhao, Yanyue Lu, Bing Qin

Motivated by how humans tackle stance detection tasks, we propose to incorporate the stance reasoning process as task knowledge to assist in learning genuine features and reducing reliance on bias features.

Sentence Stance Detection

BigCilin: An Automatic Chinese Open-domain Knowledge Graph with Fine-grained Hypernym-Hyponym Relations

no code implementations 7 Nov 2022 Ming Liu, Yaojia LV, Jingrun Zhang, Ruiji Fu, Bing Qin

One is that it supports querying any Chinese named entity and browsing the extracted hypernym-hyponym paths surrounding the query entity.

Debiasing Stance Detection Models with Counterfactual Reasoning and Adversarial Bias Learning

no code implementations 20 Dec 2022 Jianhua Yuan, Yanyan Zhao, Bing Qin

Stance detection models may tend to rely on dataset bias in the text part as a shortcut and thus fail to sufficiently learn the interaction between the targets and texts.

counterfactual Counterfactual Inference +2

Semantic-aware Contrastive Learning for Electroencephalography-to-Text Generation with Curriculum Learning

no code implementations 23 Jan 2023 Xiachong Feng, Xiaocheng Feng, Bing Qin

To mitigate this challenge, we devise a Curriculum Semantic-aware Contrastive Learning strategy (C-SCL), which effectively re-calibrates the subject-dependent EEG representation to the semantic-dependent EEG representation, thus reducing the discrepancy.

Contrastive Learning EEG +1

STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training

no code implementations 20 Feb 2023 Weihong Zhong, Mao Zheng, Duyu Tang, Xuan Luo, Heng Gong, Xiaocheng Feng, Bing Qin

Although large-scale video-language pre-training models, which usually build a global alignment between the video and the text, have achieved remarkable progress on various downstream tasks, the idea of adopting fine-grained information during the pre-training stage is not well explored.

Language Modelling Object +5

Hierarchical Catalogue Generation for Literature Review: A Benchmark

1 code implementation 7 Apr 2023 Kun Zhu, Xiaocheng Feng, Xiachong Feng, Yingsheng Wu, Bing Qin

Scientific literature review generation aims to extract and organize important information from an abundant collection of reference papers and produce corresponding reviews, while lacking a clear and logical hierarchy.

Informativeness Review Generation

Global Prompt Cell: A Portable Control Module for Effective Prompt Tuning

no code implementations 12 Apr 2023 Chi Liu, Haochun Wang, Nuwa Xi, Sendong Zhao, Bing Qin

As a novel approach to tuning pre-trained models, prompt tuning involves freezing the parameters in downstream tasks while inserting trainable embeddings into inputs in the first layer.

Is ChatGPT Equipped with Emotional Dialogue Capabilities?

no code implementations 19 Apr 2023 Weixiang Zhao, Yanyan Zhao, Xin Lu, Shilong Wang, Yanpeng Tong, Bing Qin

This report presents a study on the emotional dialogue capability of ChatGPT, an advanced language model developed by OpenAI.

Dialogue Understanding Language Modelling

The Role of Summarization in Generative Agents: A Preliminary Perspective

no code implementations 2 May 2023 Xiachong Feng, Xiaocheng Feng, Bing Qin

Generative agents that simulate human society show tremendous potential for further research and practical applications.

Improving Cross-Task Generalization with Step-by-Step Instructions

no code implementations 8 May 2023 Yang Wu, Yanyan Zhao, Zhongyang Li, Bing Qin, Kai Xiong

Instruction tuning has been shown to be able to improve cross-task generalization of language models.

UNIMO-3: Multi-granularity Interaction for Vision-Language Representation Learning

no code implementations 23 May 2023 Hao Yang, Can Gao, Hao Líu, Xinyan Xiao, Yanyan Zhao, Bing Qin

The experimental results show that our model achieves state-of-the-art performance in various downstream tasks, and an ablation study shows that effective cross-layer learning improves the model's ability of multimodal representation.

Representation Learning

SmartTrim: Adaptive Tokens and Attention Pruning for Efficient Vision-Language Models

no code implementations 24 May 2023 Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Haichao Zhu, Jiafeng Liang, Liping Shan, Ming Liu, Dongliang Xu, Qing Yang, Bing Qin

Despite achieving remarkable performance on various vision-language tasks, Transformer-based Vision-Language Models (VLMs) suffer from redundancy in inputs and parameters, significantly hampering their efficiency in real-world applications.

Data Augmentation

Improved Visual Story Generation with Adaptive Context Modeling

no code implementations 26 May 2023 Zhangyin Feng, Yuchen Ren, Xinmiao Yu, Xiaocheng Feng, Duyu Tang, Shuming Shi, Bing Qin

Diffusion models developed on top of powerful text-to-image generation models like Stable Diffusion achieve remarkable success in visual story generation.

Story Generation Story Visualization +1

SkillNet-X: A Multilingual Multitask Model with Sparsely Activated Skills

no code implementations 28 Jun 2023 Zhangyin Feng, Yong Dai, Fan Zhang, Duyu Tang, Xiaocheng Feng, Shuangzhi Wu, Bing Qin, Yunbo Cao, Shuming Shi

Traditional multitask learning methods basically can only exploit common knowledge task-wise or language-wise, losing either cross-language or cross-task knowledge.

Natural Language Understanding

UniCoRN: Unified Cognitive Signal ReconstructioN bridging cognitive signals and human language

no code implementations 6 Jul 2023 Nuwa Xi, Sendong Zhao, Haochun Wang, Chi Liu, Bing Qin, Ting Liu

In this paper, we propose fMRI2text, the first open-vocabulary task aiming to bridge fMRI time series and human language.

Brain Computer Interface EEG +2

Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization

no code implementations 7 Aug 2023 Xiachong Feng, Xiaocheng Feng, Xiyuan Du, Min-Yen Kan, Bing Qin

However, existing work has focused on training models on centralized data, neglecting real-world scenarios where meeting data are infeasible to collect centrally, due to their sensitive nature.

Federated Learning Knowledge Distillation +1

Make Your Decision Convincing! A Unified Two-Stage Framework: Self-Attribution and Decision-Making

no code implementations 20 Oct 2023 Yanrui Du, Sendong Zhao, Haochun Wang, Yuhan Chen, Rui Bai, Zewen Qiang, MuZhen Cai, Bing Qin

Through extensive experiments on five reasoning datasets from the ERASER benchmark, we demonstrate that our framework not only establishes a more reliable link between the generated rationale and model decision but also achieves competitive results in task performance and the quality of rationale.

Decision Making

MTGER: Multi-view Temporal Graph Enhanced Temporal Reasoning over Time-Involved Document

no code implementations 8 Nov 2023 Zheng Chu, Zekun Wang, Jiafeng Liang, Ming Liu, Bing Qin

To address this issue, we propose MTGER, a novel Multi-view Temporal Graph Enhanced Temporal Reasoning framework for temporal reasoning over time-involved documents.

Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications

no code implementations 10 Nov 2023 Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

In this paper, we propose a review to discuss the trends in integration of knowledge and large language models, including taxonomy of methods, benchmarks, and applications.

knowledge editing Retrieval

Analyzing the Inherent Response Tendency of LLMs: Real-World Instructions-Driven Jailbreak

no code implementations 7 Dec 2023 Yanrui Du, Sendong Zhao, Ming Ma, Yuhan Chen, Bing Qin

The jailbreak idea of our method is "Inherent Response Tendency Analysis", which identifies real-world instructions that can inherently induce LLMs to generate affirmative responses; the corresponding jailbreak strategy is "Real-World Instructions-Driven Jailbreak", which strategically splices the real-world instructions identified through this analysis around the malicious instruction.

Emage: Non-Autoregressive Text-to-Image Generation

no code implementations 22 Dec 2023 Zhangyin Feng, Runyi Hu, Liangxin Liu, Fan Zhang, Duyu Tang, Yong Dai, Xiaocheng Feng, Jiwei Li, Bing Qin, Shuming Shi

Compared with autoregressive baselines that need to run one thousand times, our model runs only 16 times to generate images of competitive quality with an order of magnitude lower inference latency.

Denoising Text-to-Image Generation

Length Extrapolation of Transformers: A Survey from the Perspective of Positional Encoding

no code implementations 28 Dec 2023 Liang Zhao, Xiaocheng Feng, Xiachong Feng, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu

In this survey, we present these advances towards length extrapolation in a unified notation from the perspective of PE.

Position

Aligning Translation-Specific Understanding to General Understanding in Large Language Models

no code implementations 10 Jan 2024 Yichong Huang, Xiaocheng Feng, Baohang Li, Chengpeng Fu, Wenshuai Huo, Ting Liu, Bing Qin

To align the translation-specific understanding with the general one, we propose a novel translation process, xIoD (Cross-Lingual Interpretation of Difficult words), which explicitly incorporates the general understanding of the content that incurs inconsistent understanding to guide the translation.

Machine Translation Translation

SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models

no code implementations 16 Jan 2024 Weixiang Zhao, Shilong Wang, Yulin Hu, Yanyan Zhao, Bing Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che

Existing methods devise the learning module to acquire task-specific knowledge with parameter-efficient tuning (PET) block and the selection module to pick out the corresponding one for the testing input, aiming at handling the challenges of catastrophic forgetting and knowledge transfer in CL.

Continual Learning Transfer Learning

Beyond Direct Diagnosis: LLM-based Multi-Specialist Agent Consultation for Automatic Diagnosis

no code implementations 29 Jan 2024 Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu

Automatic diagnosis is a significant application of AI in healthcare, where diagnoses are generated based on the symptom description of patients.

Natural Language Understanding

Beyond the Answers: Reviewing the Rationality of Multiple Choice Question Answering for the Evaluation of Large Language Models

no code implementations 2 Feb 2024 Haochun Wang, Sendong Zhao, Zewen Qiang, Bing Qin, Ting Liu

In the field of natural language processing (NLP), Large Language Models (LLMs) have precipitated a paradigm shift, markedly enhancing performance in natural language generation tasks.

Multiple-choice Multiple Choice Question Answering (MCQA) +1

Both Matter: Enhancing the Emotional Intelligence of Large Language Models without Compromising the General Intelligence

no code implementations 15 Feb 2024 Weixiang Zhao, Zhuojun Li, Shilong Wang, Yang Wang, Yulin Hu, Yanyan Zhao, Chen Wei, Bing Qin

Emotional Intelligence (EI), consisting of emotion perception, emotion cognition and emotion expression, plays a critical role in improving the user interaction experience for current large language model (LLM) based conversational general AI assistants.

Emotional Intelligence Language Modelling +1

Deciphering the Impact of Pretraining Data on Large Language Models through Machine Unlearning

no code implementations 18 Feb 2024 Yang Zhao, Li Du, Xiao Ding, Kai Xiong, Zhouhao Sun, Jun Shi, Ting Liu, Bing Qin

Through pretraining on a corpus with various sources, Large Language Models (LLMs) have gained impressive performance.

Machine Unlearning

How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider Transformer Models

no code implementations 4 Mar 2024 Xin Lu, Yanyan Zhao, Bing Qin

In this work, we attempt to explain and reverse the decline in base capabilities caused by the architecture of FFN-Wider Transformers, seeking to provide some insights.

Few-Shot Learning Language Modelling +1

Vanilla Transformers are Transfer Capability Teachers

no code implementations 4 Mar 2024 Xin Lu, Yanyan Zhao, Bing Qin

However, studies have indicated that MoE Transformers underperform vanilla Transformers in many downstream tasks, significantly diminishing the practical value of MoE models.

Computational Efficiency

AS-ES Learning: Towards Efficient CoT Learning in Small Models

no code implementations 4 Mar 2024 Nuwa Xi, Yuhan Chen, Sendong Zhao, Haochun Wang, Bing Qin, Ting Liu

Chain-of-Thought (CoT) serves as a critical emerging ability in LLMs, especially when it comes to logical reasoning.

Data Augmentation Logical Reasoning

Meaningful Learning: Advancing Abstract Reasoning in Large Language Models via Generic Fact Guidance

no code implementations 14 Mar 2024 Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao

Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence.

Memorization

RU22Fact: Optimizing Evidence for Multilingual Explainable Fact-Checking on Russia-Ukraine Conflict

1 code implementation 25 Mar 2024 Yirong Zeng, Xiao Ding, Yi Zhao, Xiangyu Li, Jie Zhang, Chao Yao, Ting Liu, Bing Qin

Furthermore, we construct RU22Fact, a novel multilingual explainable fact-checking dataset on the Russia-Ukraine conflict in 2022 of 16K samples, each containing real-world claims, optimized evidence, and referenced explanation.

16k Claim Verification +4
