Search Results for author: Sirui Wang

Found 17 papers, 9 papers with code

Table Fact Verification with Structure-Aware Transformer

no code implementations EMNLP 2020 Hongzhi Zhang, Yingyao Wang, Sirui Wang, Xuezhi Cao, Fuzheng Zhang, Zhongyuan Wang

Verifying facts on semi-structured evidence like tables requires the ability to encode structural information and perform symbolic reasoning.

Fact Verification

CQG: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation

1 code implementation ACL 2022 Zichu Fei, Qi Zhang, Tao Gui, Di Liang, Sirui Wang, Wei Wu, Xuanjing Huang

CQG employs a simple method to generate multi-hop questions that contain key entities in multi-hop reasoning chains, which ensures the complexity and quality of the questions.

Question Generation

Large-Scale Relation Learning for Question Answering over Knowledge Bases with Pre-trained Language Models

1 code implementation EMNLP 2021 Yuanmeng Yan, Rumei Li, Sirui Wang, Hongzhi Zhang, Zan Daoguang, Fuzheng Zhang, Wei Wu, Weiran Xu

The key challenge of question answering over knowledge bases (KBQA) is the inconsistency between the natural language questions and the reasoning paths in the knowledge base (KB).

Question Answering Relation Extraction

Robust Lottery Tickets for Pre-trained Language Models

1 code implementation ACL 2022 Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang

Recent work on the Lottery Ticket Hypothesis has shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) that are capable of reaching accuracy comparable to the original models.

Adversarial Robustness
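As a toy illustration of the winning-ticket idea behind this line of work, one-shot magnitude pruning keeps only the largest-magnitude weights. This is a hypothetical sketch of the basic pruning step, not the robust-ticket procedure from the paper:

```python
def winning_ticket_mask(weights, sparsity=0.5):
    # One-shot magnitude pruning: zero out the `sparsity` fraction of
    # weights with the smallest absolute values, keep the rest.
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)          # number of weights to prune
    threshold = flat[k] if k < len(flat) else float("inf")
    return [1 if abs(w) >= threshold else 0 for w in weights]

mask = winning_ticket_mask([0.5, -0.01, 0.03, -0.9], sparsity=0.5)
# [1, 0, 0, 1] -- keeps the two largest-magnitude weights
```

In the lottery-ticket setting, the masked subnetwork is then rewound to its initial weights and retrained in isolation.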

Improving Semantic Matching through Dependency-Enhanced Pre-trained Model with Adaptive Fusion

no code implementations 16 Oct 2022 Jian Song, Di Liang, Rumei Li, Yuntao Li, Sirui Wang, Minlong Peng, Wei Wu, Yongxin Yu

Transformer-based pre-trained models like BERT have achieved great progress on Semantic Sentence Matching.

DABERT: Dual Attention Enhanced BERT for Semantic Matching

no code implementations COLING 2022 Sirui Wang, Di Liang, Jian Song, Yuntao Li, Wei Wu

To alleviate this problem, we propose a novel Dual Attention Enhanced BERT (DABERT) to enhance the ability of BERT to capture fine-grained differences in sentence pairs.

Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation

no code implementations31 Aug 2022 Sirui Wang, Kaiwen Wei, Hongzhi Zhang, Yuntao Li, Wei Wu

Inspired by the human learning process, in this paper we introduce Imitation DEMOnstration Learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) a contrastive learning mechanism to concentrate on similar demonstrations.

Association Contrastive Learning

InstructionNER: A Multi-Task Instruction-Based Generative Framework for Few-shot NER

1 code implementation 8 Mar 2022 LiWen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, Weiran Xu

Recently, prompt-based methods have achieved strong performance in few-shot learning scenarios by bridging the gap between language model pre-training and fine-tuning for downstream tasks.

Entity Typing Few-Shot Learning +3

Pay More Attention to History: A Context Modelling Strategy for Conversational Text-to-SQL

1 code implementation 16 Dec 2021 Yuntao Li, Hanchu Zhang, Yutian Li, Sirui Wang, Wei Wu, Yan Zhang

Conversational text-to-SQL aims at converting multi-turn natural language queries into their corresponding SQL (Structured Query Language) representations.

Natural Language Queries Semantic Parsing +1
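A toy example of why conversation history matters in this task. The table and column names are hypothetical, and real systems model the context neurally rather than by string concatenation:

```python
def resolve_follow_up(prev_sql, condition_sql):
    # Naively attach a follow-up condition to the previous turn's SQL.
    # On its own, a follow-up like "only those older than 20" has no
    # parseable meaning; it must be resolved against the history.
    joiner = " AND " if " WHERE " in prev_sql else " WHERE "
    return prev_sql + joiner + condition_sql

# Turn 1: "Show the names of all students."
# Turn 2: "Only those older than 20."
sql = resolve_follow_up("SELECT name FROM students", "age > 20")
# "SELECT name FROM students WHERE age > 20"
```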

Learn with Noisy Data via Unsupervised Loss Correction for Weakly Supervised Reading Comprehension

no code implementations COLING 2020 Xuemiao Zhang, Kun Zhou, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Junfei Liu

The weakly supervised machine reading comprehension (MRC) task is practical and promising for its easily available and massive training data, but it inevitably introduces noise.

Machine Reading Comprehension

Leveraging Historical Interaction Data for Improving Conversational Recommender System

no code implementations 19 Aug 2020 Kun Zhou, Wayne Xin Zhao, Hui Wang, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

Most of the existing CRS methods focus on learning effective preference representations for users from conversation data alone.

Recommendation Systems

S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization

2 code implementations 18 Aug 2020 Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

To tackle this problem, we propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation, based on the self-attentive neural architecture.

Association Self-Supervised Learning +1
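Mutual information maximization in self-supervised objectives is commonly approximated with an InfoNCE-style contrastive loss. The following is a generic sketch in plain Python, not S^3-Rec's exact pretraining objectives:

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style loss: pull the anchor toward its positive pair and
    # push it away from negatives; a common lower-bound surrogate for
    # mutual information between the two views.
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    logits = [dot(anchor, positive)] + [dot(anchor, n) for n in negatives]
    logits = [l / temperature for l in logits]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

When the anchor matches its positive and differs from the negatives, the loss is near zero; a mismatched positive drives it up.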

BATS: A Spectral Biclustering Approach to Single Document Topic Modeling and Segmentation

no code implementations 5 Aug 2020 Qiong Wu, Adam Hare, Sirui Wang, Yuwei Tu, Zhenming Liu, Christopher G. Brinton, Yanhua Li

In this work, we reexamine the inter-related problems of "topic identification" and "text segmentation" for sparse document learning, when there is a single new text of interest.

Text Segmentation Topic Models
