Search Results for author: Wenpeng Yin

Found 38 papers, 13 papers with code

DocNLI: A Large-scale Dataset for Document-level Natural Language Inference

1 code implementation · 17 Jun 2021 · Wenpeng Yin, Dragomir Radev, Caiming Xiong

Natural language inference (NLI) has been studied intensively in the past few years thanks to the availability of large-scale labeled datasets.

Document-level Natural Language Inference +2

Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System

1 code implementation · NAACL 2021 · Congying Xia, Wenpeng Yin, Yihao Feng, Philip Yu

Two major challenges exist in this new task: (i) For the learning process, the system should incrementally learn new classes round by round without re-training on the examples of preceding classes; (ii) For the performance, the system should perform well on new classes without much loss on preceding classes.

Few-Shot Text Classification · General Classification +3

Learning to Synthesize Data for Semantic Parsing

1 code implementation · NAACL 2021 · Bailin Wang, Wenpeng Yin, Xi Victoria Lin, Caiming Xiong

Moreover, explicitly modeling compositions using a PCFG leads to better exploration of unseen programs, thus generating more diverse data.

Domain Generalization · Semantic Parsing +2

Paired Representation Learning for Event and Entity Coreference

no code implementations · 24 Oct 2020 · Xiaodong Yu, Wenpeng Yin, Dan Roth

Given a pair of elements (events or entities), our model treats the pair's sentences as a single sequence so that each element in the pair learns its representation by encoding its own context as well as the other element's context.

Coreference Resolution · Event Cross-Document Coreference Resolution +1

Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start

1 code implementation · EMNLP 2020 · Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, Caiming Xiong

We demonstrate that this framework enables a pretrained entailment model to work well on new entailment domains in a few-shot setting, and show its effectiveness as a unified solver for several downstream NLP tasks such as question answering and coreference resolution when the end-task annotations are limited.

Coreference Resolution · Natural Language Inference +1

Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks

no code implementations · 5 Oct 2020 · Lichao Sun, Congying Xia, Wenpeng Yin, TingTing Liang, Philip S. Yu, Lifang He

Our studies show that mixup is a domain-independent data augmentation technique for pre-trained language models, resulting in significant performance improvement for transformer-based models.

Data Augmentation · Image Classification
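
The mixup idea behind this entry can be sketched in a few lines. The snippet below is a minimal illustration of interpolating two sentence representations and their one-hot labels, not the authors' exact Mixup-Transformer implementation; the interpolation site (sentence-level hidden vectors) and the Beta prior are assumptions here:

```python
import numpy as np

def mixup(h_i, h_j, y_i, y_j, alpha=1.0, rng=None):
    """Linearly interpolate two hidden representations and their labels."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    h_mix = lam * h_i + (1 - lam) * h_j   # mixed sentence representation
    y_mix = lam * y_i + (1 - lam) * y_j   # mixed (soft) label
    return h_mix, y_mix
```

Training on such interpolated pairs regularizes the classifier toward linear behavior between examples, which is what makes the trick domain-independent.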

Meta-learning for Few-shot Natural Language Processing: A Survey

no code implementations · 19 Jul 2020 · Wenpeng Yin

If the target task itself cannot provide more information, how about collecting more tasks equipped with rich annotations to help the model learn?


CO-Search: COVID-19 Information Retrieval with Semantic Search, Question Answering, and Abstractive Summarization

no code implementations · 17 Jun 2020 · Andre Esteva, Anuprit Kale, Romain Paulus, Kazuma Hashimoto, Wenpeng Yin, Dragomir Radev, Richard Socher

The COVID-19 global pandemic has resulted in international efforts to understand, track, and mitigate the disease, yielding a significant corpus of COVID-19 and SARS-CoV-2-related publications across scientific disciplines.

Abstractive Text Summarization · Information Retrieval +2

Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT

no code implementations · 27 Feb 2020 · Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, Caiming Xiong

There is an increasing amount of literature that claims the brittleness of deep neural networks in dealing with adversarial examples that are created maliciously.

Question Answering · Sentiment Analysis

Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach

4 code implementations · IJCNLP 2019 · Wenpeng Yin, Jamaal Hay, Dan Roth

0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, or event).

General Classification · Natural Language Inference +1
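
The entailment recipe this entry describes can be sketched as follows; the hypothesis template wording below is an illustrative assumption, not the exact prompt from the paper:

```python
def build_entailment_pairs(text, labels, template="This text is about {}."):
    """Recast zero-shot text classification as textual entailment:
    the input text is the premise, and each candidate label is turned
    into a hypothesis. A pretrained entailment model would then score
    each (premise, hypothesis) pair, and the label whose hypothesis
    receives the highest entailment probability is predicted."""
    return [(text, template.format(label)) for label in labels]
```

Because the label enters through natural-language hypotheses, the same entailment model can classify against label sets it never saw at training time.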

Empirical Evaluation of Multi-task Learning in Deep Neural Networks for Natural Language Processing

no code implementations · 16 Aug 2019 · Jianquan Li, Xiaokang Liu, Wenpeng Yin, Min Yang, Liqun Ma, Yaohong Jin

Multi-Task Learning (MTL) aims at boosting the overall performance of each individual task by leveraging useful information contained in multiple related tasks.

Multi-Task Learning

TwoWingOS: A Two-Wing Optimization Strategy for Evidential Claim Verification

no code implementations · EMNLP 2018 · Wenpeng Yin, Dan Roth

We develop TwoWingOS (two-wing optimization strategy), a system that, while identifying appropriate evidence for a claim, also determines whether or not the claim is supported by the evidence.

Natural Language Inference

Term Definitions Help Hypernymy Detection

no code implementations · SEMEVAL 2018 · Wenpeng Yin, Dan Roth

Existing methods of hypernymy detection mainly rely on statistics over a big corpus, either mining some co-occurring patterns like "animals such as cats" or embedding words of interest into context-aware vectors.

Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms

1 code implementation · TACL 2018 · Wenpeng Yin, Hinrich Schütze

We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution).

Natural Language Inference · Representation Learning +1

Comparative Study of CNN and RNN for Natural Language Processing

3 code implementations · 7 Feb 2017 · Wenpeng Yin, Katharina Kann, Mo Yu, Hinrich Schütze

Deep neural networks (DNNs) have revolutionized the field of natural language processing (NLP).

Task-Specific Attentive Pooling of Phrase Alignments Contributes to Sentence Matching

no code implementations · EACL 2017 · Wenpeng Yin, Hinrich Schütze

This work studies comparatively two typical sentence matching tasks: textual entailment (TE) and answer selection (AS), observing that weaker phrase alignments are more critical in TE, while stronger phrase alignments deserve more attention in AS.

Answer Selection · Natural Language Inference +1

Simple Question Answering by Attentive Convolutional Neural Network

no code implementations · COLING 2016 · Wenpeng Yin, Mo Yu, Bing Xiang, Bo-Wen Zhou, Hinrich Schütze

In fact selection, we match the subject entity in a fact candidate with the entity mention in the question by a character-level convolutional neural network (char-CNN), and match the predicate in that fact with the question by a word-level CNN (word-CNN).

Entity Linking · Question Answering

Why and How to Pay Different Attention to Phrase Alignments of Different Intensities

no code implementations · 23 Apr 2016 · Wenpeng Yin, Hinrich Schütze

We address the problems of identifying phrase alignments of flexible granularity and pooling alignments of different intensities for these tasks.

Answer Selection · Natural Language Inference +1

Online Updating of Word Representations for Part-of-Speech Tagging

no code implementations · EMNLP 2015 · Wenpeng Yin, Tobias Schnabel, Hinrich Schütze

We propose online unsupervised domain adaptation (DA), which is performed incrementally as data comes in and is applicable when batch DA is not possible.

Part-Of-Speech Tagging · Unsupervised Domain Adaptation

Discriminative Phrase Embedding for Paraphrase Identification

no code implementations · HLT 2015 · Wenpeng Yin, Hinrich Schütze

This work on the paraphrase identification task contributes, on the one hand, to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases.

Paraphrase Identification

ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs

8 code implementations · TACL 2016 · Wenpeng Yin, Hinrich Schütze, Bing Xiang, Bo-Wen Zhou

(ii) We propose three attention schemes that integrate mutual influence between sentences into CNN; thus, the representation of each sentence takes into consideration its counterpart.

Answer Selection · Natural Language Inference +1
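
One way such mutual influence between sentences can be realized is an attention matrix over all word pairs of the two sentences. The sketch below uses the Euclidean match score 1/(1 + |x − y|) as one plausible choice; treat the exact scoring function and how the matrix feeds back into convolution or pooling as assumptions:

```python
import numpy as np

def attention_matrix(s1, s2):
    """A[i, j] scores how well word i of sentence 1 matches word j of
    sentence 2; rows and columns of A can then reweight each sentence's
    convolution input or pooling by its counterpart sentence."""
    diff = s1[:, None, :] - s2[None, :, :]   # (len1, len2, dim) pairwise differences
    dist = np.linalg.norm(diff, axis=-1)     # Euclidean distance per word pair
    return 1.0 / (1.0 + dist)                # higher score = closer match
```

Identical word vectors score 1.0 and increasingly distant pairs decay toward 0, so each sentence's representation can emphasize the positions its counterpart aligns with.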

Learning Meta-Embeddings by Using Ensembles of Embedding Sets

1 code implementation · 18 Aug 2015 · Wenpeng Yin, Hinrich Schütze

Word embeddings -- distributed representations of words -- in deep learning are beneficial for many tasks in natural language processing (NLP).

Part-Of-Speech Tagging · Word Embeddings +1

Deep Learning Embeddings for Discontinuous Linguistic Units

no code implementations · 18 Dec 2013 · Wenpeng Yin, Hinrich Schütze

Deep learning embeddings have been successfully used for many natural language processing problems.

Coreference Resolution
