Search Results for author: Wanjun Zhong

Found 17 papers, 7 papers with code

LogiGAN: Learning Logical Reasoning via Adversarial Pre-training

no code implementations • 18 May 2022 • Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, Jian-Guang Lou

We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
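
The abstract names the core idea: an adversarial loop in which a generator proposes completions for masked statements and a verifier learns to judge their plausibility. The toy sketch below illustrates that generic generator–verifier setup only; the model shapes, losses, and REINFORCE-style update are assumptions for illustration, not LogiGAN's actual implementation.

```python
# Toy generator-verifier adversarial pre-training loop (illustrative only).
import torch
import torch.nn as nn

VOCAB, DIM = 100, 32

class Generator(nn.Module):
    """Proposes a filler token for a masked position in a statement."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, VOCAB)
    def forward(self, tokens):
        h = self.embed(tokens).mean(dim=1)      # crude sentence encoding
        return self.head(h)                     # logits over candidate fillers

class Verifier(nn.Module):
    """Scores whether a (statement, filler) pair is plausible."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.score = nn.Linear(DIM, 1)
    def forward(self, tokens, filler):
        h = self.embed(tokens).mean(dim=1) + self.embed(filler)
        return self.score(h).squeeze(-1)        # plausibility logit

gen, ver = Generator(), Verifier()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
v_opt = torch.optim.Adam(ver.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    tokens = torch.randint(0, VOCAB, (8, 12))   # stand-in "masked statements"
    gold = torch.randint(0, VOCAB, (8,))        # stand-in gold fillers

    # Verifier step: gold fillers are positives, sampled fillers negatives.
    with torch.no_grad():
        fake = torch.distributions.Categorical(logits=gen(tokens)).sample()
    v_loss = bce(ver(tokens, gold), torch.ones(8)) + \
             bce(ver(tokens, fake), torch.zeros(8))
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

    # Generator step: REINFORCE with the verifier's score as reward.
    dist = torch.distributions.Categorical(logits=gen(tokens))
    sample = dist.sample()
    reward = torch.sigmoid(ver(tokens, sample)).detach()
    g_loss = -(dist.log_prob(sample) * reward).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```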

Modeling Semantic Composition with Syntactic Hypergraph for Video Question Answering

no code implementations • 13 May 2022 • Zenan Xu, Wanjun Zhong, Qinliang Su, Zijing Ou, Fuwei Zhang

A key challenge in video question answering is how to realize the cross-modal semantic alignment between textual concepts and corresponding visual objects.

Question Answering • Semantic Composition +1

ProQA: Structural Prompt-based Pre-training for Unified Question Answering

no code implementations • 9 May 2022 • Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Furthermore, ProQA exhibits strong abilities in both continual learning and transfer learning by taking advantage of the structural prompt.

Continual Learning • Few-Shot Learning +3

Reasoning over Hybrid Chain for Table-and-Text Open Domain QA

1 code implementation • 15 Jan 2022 • Wanjun Zhong, JunJie Huang, Qian Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

CARP utilizes a hybrid chain to model the explicit intermediate reasoning process across table and text for question answering.

Open-Domain Question Answering

AR-LSAT: Investigating Analytical Reasoning of Text

1 code implementation • 14 Apr 2021 • Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang, Jian Yin, Ming Zhou, Nan Duan

Analytical reasoning is an essential and challenging task that requires a system to analyze a scenario involving a set of particular circumstances and perform reasoning over it to draw conclusions.

Syntax-Enhanced Pre-trained Model

1 code implementation • ACL 2021 • Zenan Xu, Daya Guo, Duyu Tang, Qinliang Su, Linjun Shou, Ming Gong, Wanjun Zhong, Xiaojun Quan, Nan Duan, Daxin Jiang

We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.

Entity Typing • Question Answering +1
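
As a generic illustration of syntax-enhanced modeling (an assumption for exposition, not this paper's actual architecture), one common way to inject a dependency parse into a Transformer is to mask self-attention so each token attends only to its syntactic neighbours:

```python
# Toy syntax-constrained self-attention over a hand-written dependency parse.
import torch
import torch.nn.functional as F

tokens = ["the", "cat", "sat", "on", "the", "mat"]
heads = [1, 2, 2, 2, 5, 3]        # toy dependency heads (root points to itself)

n, dim = len(tokens), 8
x = torch.randn(n, dim)           # stand-in token representations
q, k, v = x, x, x                 # single head, untrained projections

# Allow attention along dependency edges (both directions) plus self-loops.
mask = torch.eye(n, dtype=torch.bool)
for child, head in enumerate(heads):
    mask[child, head] = mask[head, child] = True

scores = (q @ k.T) / dim ** 0.5
scores = scores.masked_fill(~mask, float("-inf"))
out = F.softmax(scores, dim=-1) @ v   # syntax-constrained attention output
print(out.shape)                      # torch.Size([6, 8])
```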

Neural Deepfake Detection with Factual Structure of Text

no code implementations • EMNLP 2020 • Wanjun Zhong, Duyu Tang, Zenan Xu, Ruize Wang, Nan Duan, Ming Zhou, Jiahai Wang, Jian Yin

To address this, we propose a graph-based model that utilizes the factual structure of a document for deepfake detection of text.

DeepFake Detection • Face Swapping
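
To make the graph-based idea concrete, here is a minimal, hedged sketch: treat extracted entities as nodes, connect entities that co-occur in a sentence, and run one round of message passing before classifying the document. The entity list, feature choices, and single GCN-style layer are illustrative assumptions, not the paper's model.

```python
# Toy factual-structure graph: entity nodes, sentence co-occurrence edges.
import torch
import torch.nn as nn

sentences = [["obama", "hawaii"], ["obama", "president"], ["hawaii", "island"]]
entities = sorted({e for s in sentences for e in s})
idx = {e: i for i, e in enumerate(entities)}
n = len(entities)

# Adjacency from sentence-level co-occurrence, plus self-loops.
adj = torch.eye(n)
for s in sentences:
    for a in s:
        for b in s:
            adj[idx[a], idx[b]] = 1.0
adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalize

dim = 16
feats = nn.Embedding(n, dim)               # stand-in for contextual features
gcn = nn.Linear(dim, dim)
clf = nn.Linear(dim, 2)                    # human-written vs. machine-generated

h = torch.relu(gcn(adj @ feats.weight))    # one round of message passing
logits = clf(h.mean(dim=0))                # pool nodes, classify the document
print(logits)
```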

Reasoning Over Semantic-Level Graph for Fact Checking

1 code implementation • ACL 2020 • Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, Jian Yin

We evaluate our system on FEVER, a benchmark dataset for fact checking, and find that rich structural information is helpful and that both of our graph-based mechanisms improve accuracy.

Fact Checking • Graph Attention +2

Improving Question Answering by Commonsense-Based Pre-Training

no code implementations • 5 Sep 2018 • Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, Jian Yin

Although neural network approaches achieve remarkable success on a variety of NLP tasks, many of them struggle to answer questions that require commonsense knowledge.

Question Answering
