Search Results for author: Wenpeng Yin

Found 65 papers, 28 papers with code

A Generic Method for Fine-grained Category Discovery in Natural Language Texts

no code implementations 18 Jun 2024 Chang Tian, Matthew B. Blaschko, Wenpeng Yin, Mingzhe Xing, Yinliang Yue, Marie-Francine Moens

To address these shortcomings, we introduce a method that successfully detects fine-grained clusters of semantically similar texts guided by a novel objective function.

Contrastive Learning

Can Prompt Modifiers Control Bias? A Comparative Analysis of Text-to-Image Generative Models

no code implementations 9 Jun 2024 Philip Wootaek Shin, Jihyun Janice Ahn, Wenpeng Yin, Jack Sampson, Vijaykrishnan Narayanan

Our contributions, spanning comparative analyses, the strategic use of prompt modifiers, the exploration of prompt sequencing effects, and the introduction of a bias sensitivity taxonomy, lay the groundwork for the development of common metrics and standard analyses for evaluating whether and how future AI models exhibit and respond to requests to adjust for inherent biases.

Ethics Prompt Engineering +1

Fighting Against the Repetitive Training and Sample Dependency Problem in Few-shot Named Entity Recognition

no code implementations 8 Jun 2024 Chang Tian, Wenpeng Yin, Dan Li, Marie-Francine Moens

The general pipeline consists of a span detector to identify entity spans in text and an entity-type classifier to assign types to entities.

few-shot-ner Few-shot NER +5
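
The two-stage pipeline described above (a span detector followed by an entity-type classifier) can be sketched as follows. Both components here are toy stand-ins (capitalization heuristic, keyword lookup), not the paper's models:

```python
# Minimal sketch of a span-detect-then-classify NER pipeline.
# detect_spans and classify_span are illustrative placeholders.

def detect_spans(tokens):
    """Toy span detector: any run of capitalized tokens is a candidate span."""
    spans, start = [], None
    for i, tok in enumerate(tokens):
        if tok[0].isupper():
            start = i if start is None else start
        elif start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(tokens)))
    return spans

def classify_span(tokens, span, type_keywords):
    """Toy type classifier: pick the type whose keyword set overlaps the span."""
    words = {t.lower() for t in tokens[span[0]:span[1]]}
    for etype, kws in type_keywords.items():
        if words & kws:
            return etype
    return "MISC"

tokens = "Barack Obama visited Paris".split()
type_keywords = {"PER": {"barack", "obama"}, "LOC": {"paris"}}
entities = [(s, classify_span(tokens, s, type_keywords)) for s in detect_spans(tokens)]
print(entities)  # [((0, 2), 'PER'), ((3, 4), 'LOC')]
```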

X-Shot: A Unified System to Handle Frequent, Few-shot and Zero-shot Learning Simultaneously in Classification

1 code implementation 6 Mar 2024 Hanzi Xu, Muhao Chen, Lifu Huang, Slobodan Vucetic, Wenpeng Yin

In recent years, few-shot and zero-shot learning, which learn to predict labels with limited annotated instances, have garnered significant attention.

Domain Generalization Instruction Following +1

FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability

1 code implementation 28 Feb 2024 Congying Xia, Chen Xing, Jiangshu Du, Xinyi Yang, Yihao Feng, Ran Xu, Wenpeng Yin, Caiming Xiong

This paper presents FoFo, a pioneering benchmark for evaluating large language models' (LLMs) ability to follow complex, domain-specific formats, a crucial yet underexamined capability for their application as AI agents.

Multimodal Instruction Tuning with Conditional Mixture of LoRA

no code implementations 24 Feb 2024 Ying Shen, Zhiyang Xu, Qifan Wang, Yu Cheng, Wenpeng Yin, Lifu Huang

Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in diverse tasks across different domains, with an increasing focus on improving their zero-shot generalization capabilities for unseen multimodal tasks.

Zero-shot Generalization

Contrastive Instruction Tuning

1 code implementation 17 Feb 2024 Tianyi Lorena Yan, Fei Wang, James Y. Huang, Wenxuan Zhou, Fan Yin, Aram Galstyan, Wenpeng Yin, Muhao Chen

Instruction tuning has been used as a promising approach to improve the performance of large language models (LLMs) on unseen tasks.

Sentence

Large Language Models for Mathematical Reasoning: Progresses and Challenges

no code implementations 31 Jan 2024 Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, Wenpeng Yin

Mathematical reasoning serves as a cornerstone for assessing the fundamental cognitive capabilities of human intelligence.

Math Mathematical Reasoning

MT-Ranker: Reference-free machine translation evaluation by inter-system ranking

1 code implementation 30 Jan 2024 Ibraheem Muhammad Moosa, Rui Zhang, Wenpeng Yin

Traditionally, Machine Translation (MT) Evaluation has been treated as a regression problem -- producing an absolute translation-quality score.

Machine Translation Natural Language Inference +2
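
The entry above frames MT evaluation as inter-system ranking rather than regression. A minimal sketch: given a pairwise judge that says which of two candidate translations is better, rank systems by their win counts. The judge here is a toy length-based heuristic, not the paper's model:

```python
# Pairwise-ranking evaluation sketch: rank MT systems by pairwise wins.
from itertools import combinations

def judge(cand_a, cand_b, source):
    """Toy pairwise judge: prefer the candidate closer in length to the source."""
    da = abs(len(cand_a.split()) - len(source.split()))
    db = abs(len(cand_b.split()) - len(source.split()))
    return 0 if da <= db else 1  # index of the preferred candidate

def rank_systems(outputs, source):
    """Rank system names by number of pairwise wins (descending)."""
    wins = {name: 0 for name in outputs}
    for a, b in combinations(outputs, 2):
        winner = a if judge(outputs[a], outputs[b], source) == 0 else b
        wins[winner] += 1
    return sorted(wins, key=wins.get, reverse=True)

source = "Der schnelle braune Fuchs springt"
outputs = {
    "sysA": "The quick brown fox jumps",
    "sysB": "Quick fox",
    "sysC": "The very quick and brown fox jumps high today",
}
print(rank_systems(outputs, source))  # ['sysA', 'sysB', 'sysC']
```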

GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models

no code implementations 11 Dec 2023 Jiaxu Zhao, Meng Fang, Shirui Pan, Wenpeng Yin, Mykola Pechenizkiy

In this work, we propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs (e.g., GPT-4) to assess bias in models.

MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction-Following

no code implementations 5 Dec 2023 Renze Lou, Kai Zhang, Jian Xie, Yuxuan Sun, Janice Ahn, Hanzi Xu, Yu Su, Wenpeng Yin

In the realm of large language models (LLMs), enhancing instruction-following capability often involves curating expansive training data.

Instruction Following

Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse Finetuning

1 code implementation 7 Nov 2023 Sarkar Snigdha Sarathi Das, Ranran Haoran Zhang, Peng Shi, Wenpeng Yin, Rui Zhang

Unfortunately, this requires formatting them into a specialized augmented format unknown to the base pretrained language models (PLMs), necessitating finetuning to the target format.

In-Context Learning Language Modelling +6

All Labels Together: Low-shot Intent Detection with an Efficient Label Semantic Encoding Paradigm

no code implementations 7 Sep 2023 Jiangshu Du, Congying Xia, Wenpeng Yin, TingTing Liang, Philip S. Yu

In intent detection tasks, leveraging meaningful semantic information from intent labels can be particularly beneficial for few-shot scenarios.

Domain Generalization Intent Detection

Toward Zero-Shot Instruction Following

1 code implementation 4 Aug 2023 Renze Lou, Wenpeng Yin

This work proposes a challenging yet more realistic setting for zero-shot cross-task generalization: zero-shot instruction following, presuming the existence of a paragraph-style task definition while no demonstrations exist.

Instruction Following

Large Language Model Instruction Following: A Survey of Progresses and Challenges

1 code implementation 18 Mar 2023 Renze Lou, Kai Zhang, Wenpeng Yin

This survey paper summarizes and provides insights into current research on instruction following, in particular by answering the following questions: (i) What is task instruction, and what instruction types exist?

Instruction Following Language Modelling +1

Robustness of Learning from Task Instructions

1 code implementation 7 Dec 2022 Jiasheng Gu, Hongyu Zhao, Hanzi Xu, Liangyu Nie, Hongyuan Mei, Wenpeng Yin

To our knowledge, this is the first work that systematically studies how robust a PLM is when it is supervised by instructions with different factors of variability.

Language Modelling

Learning to Select from Multiple Options

1 code implementation 1 Dec 2022 Jiangshu Du, Wenpeng Yin, Congying Xia, Philip S. Yu

To deal with the two issues, this work first proposes a contextualized TE model (Context-TE) by appending the other k options as the context of the current (P, H) modeling.

Entity Typing Intent Detection +2
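
Context-TE, as described above, scores each option as a textual-entailment hypothesis while the remaining k options are appended as context. A minimal sketch of that input construction (the scoring model itself is out of scope; the `[CONTEXT]` separator token is an illustrative assumption, not the paper's format):

```python
# Build (premise, hypothesis-with-context) pairs for option selection as TE.

def build_context_te_inputs(premise, options):
    """For each option H, pair premise P with H plus the remaining options as context."""
    inputs = []
    for i, hyp in enumerate(options):
        context = " ; ".join(o for j, o in enumerate(options) if j != i)
        # [CONTEXT] is a hypothetical separator token for illustration.
        inputs.append((premise, f"{hyp} [CONTEXT] {context}"))
    return inputs

pairs = build_context_te_inputs(
    "The capital of France is a large European city.",
    ["Paris", "Berlin", "Madrid"],
)
for _, h in pairs:
    print(h)
```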

OpenStance: Real-world Zero-shot Stance Detection

1 code implementation 25 Oct 2022 Hanzi Xu, Slobodan Vucetic, Wenpeng Yin

To our knowledge, this is the first work that studies stance detection under the open-domain zero-shot setting.

Domain Generalization Natural Language Inference +1

ConTinTin: Continual Learning from Task Instructions

no code implementations ACL 2022 Wenpeng Yin, Jia Li, Caiming Xiong

This work defines a new learning paradigm ConTinTin (Continual Learning from Task Instructions), in which a system should learn a sequence of new tasks one by one, where each task is explained by a piece of textual instruction.

Continual Learning

Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference

1 code implementation 12 Feb 2022 Bangzheng Li, Wenpeng Yin, Muhao Chen

The task of ultra-fine entity typing (UFET) seeks to predict diverse and free-form words or phrases that describe the appropriate types of entities mentioned in sentences.

Entity Typing Learning-To-Rank +2

Event Linking: Grounding Event Mentions to Wikipedia

1 code implementation 15 Dec 2021 Xiaodong Yu, Wenpeng Yin, Nitish Gupta, Dan Roth

Third, we retrain and evaluate two state-of-the-art (SOTA) entity linking models, showing the challenges of event linking, and we propose an event-specific linking system EVELINK to set a competitive result for the new task.

Entity Linking Natural Language Understanding

Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System

1 code implementation NAACL 2021 Congying Xia, Wenpeng Yin, Yihao Feng, Philip Yu

Two major challenges exist in this new task: (i) For the learning process, the system should incrementally learn new classes round by round without re-training on the examples of preceding classes; (ii) For the performance, the system should perform well on new classes without much loss on preceding classes.

Few-Shot Text Classification General Classification +4

Learning to Synthesize Data for Semantic Parsing

1 code implementation NAACL 2021 Bailin Wang, Wenpeng Yin, Xi Victoria Lin, Caiming Xiong

Moreover, explicitly modeling compositions using a PCFG leads to better exploration of unseen programs, thus generating more diverse data.

Domain Generalization Semantic Parsing +3
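
The snippet above notes that explicitly modeling compositions with a PCFG lets the synthesizer explore unseen programs. A minimal sketch of sampling from a toy PCFG over a SQL-like grammar (rules and weights are illustrative, not from the paper):

```python
# Toy PCFG sampler: nonterminals expand by weighted production choice.
import random

GRAMMAR = {
    "QUERY": [(["SELECT", "COL", "FROM", "TABLE", "COND"], 1.0)],
    "COL":   [(["name"], 0.5), (["age"], 0.5)],
    "TABLE": [(["users"], 1.0)],
    "COND":  [([], 0.6), (["WHERE", "COL", ">", "10"], 0.4)],
}

def sample(symbol, rng):
    """Recursively expand a symbol; anything not in GRAMMAR is a terminal."""
    if symbol not in GRAMMAR:
        return [symbol]
    rules, weights = zip(*GRAMMAR[symbol])
    rhs = rng.choices(rules, weights=weights, k=1)[0]
    out = []
    for sym in rhs:
        out.extend(sample(sym, rng))
    return out

rng = random.Random(0)
for _ in range(3):
    print(" ".join(sample("QUERY", rng)))
```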

Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start

1 code implementation EMNLP 2020 Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, Caiming Xiong

We demonstrate that this framework enables a pretrained entailment model to work well on new entailment domains in a few-shot setting, and show its effectiveness as a unified solver for several downstream NLP tasks such as question answering and coreference resolution when the end-task annotations are limited.

coreference-resolution Natural Language Inference +1

Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks

no code implementations COLING 2020 Lichao Sun, Congying Xia, Wenpeng Yin, TingTing Liang, Philip S. Yu, Lifang He

Our studies show that mixup is a domain-independent data augmentation technique for pre-trained language models, resulting in significant performance improvement for transformer-based models.

Data Augmentation Image Classification
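
Mixup, as applied to transformer representations in the entry above, linearly interpolates two examples' hidden vectors and labels. A minimal sketch on toy feature vectors (the interpolation site in the actual paper is the transformer's representation, not raw inputs):

```python
# Mixup sketch: interpolate features and one-hot labels with
# lambda drawn from Beta(alpha, alpha).
import random

def mixup(x1, y1, x2, y2, alpha=0.2, rng=random):
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

x, y, lam = mixup([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1], rng=random.Random(0))
print(lam, x, y)
```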

Meta-learning for Few-shot Natural Language Processing: A Survey

no code implementations 19 Jul 2020 Wenpeng Yin

If the target task itself cannot provide more information, how about collecting more tasks equipped with rich annotations to help the model learn?

Meta-Learning

CO-Search: COVID-19 Information Retrieval with Semantic Search, Question Answering, and Abstractive Summarization

no code implementations 17 Jun 2020 Andre Esteva, Anuprit Kale, Romain Paulus, Kazuma Hashimoto, Wenpeng Yin, Dragomir Radev, Richard Socher

The COVID-19 global pandemic has resulted in international efforts to understand, track, and mitigate the disease, yielding a significant corpus of COVID-19 and SARS-CoV-2-related publications across scientific disciplines.

Abstractive Text Summarization Information Retrieval +3

Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT

no code implementations 27 Feb 2020 Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, Caiming Xiong

There is an increasing amount of literature that claims the brittleness of deep neural networks in dealing with adversarial examples that are created maliciously.

Question Answering Sentence +1

Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach

4 code implementations IJCNLP 2019 Wenpeng Yin, Jamaal Hay, Dan Roth

0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.).

Benchmarking General Classification +3
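
The entailment approach above casts each candidate label as a hypothesis ("This text is about {label}.") and picks the label whose hypothesis the text entails most strongly. The scorer below is a toy word-overlap stand-in for a real NLI model:

```python
# Zero-shot text classification via entailment, with a toy scorer.
import re

def entail_score(premise, hypothesis):
    """Toy entailment scorer: fraction of hypothesis words found in the premise."""
    p = set(re.findall(r"\w+", premise.lower()))
    h = set(re.findall(r"\w+", hypothesis.lower()))
    return len(p & h) / len(h)

def zero_shot_classify(text, labels):
    hypotheses = {lab: f"This text is about {lab}." for lab in labels}
    return max(labels, key=lambda lab: entail_score(text, hypotheses[lab]))

print(zero_shot_classify(
    "The team won the football league and sports fans celebrated",
    ["politics", "sports", "cooking"],
))  # 'sports'
```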

Empirical Evaluation of Multi-task Learning in Deep Neural Networks for Natural Language Processing

no code implementations 16 Aug 2019 Jianquan Li, Xiaokang Liu, Wenpeng Yin, Min Yang, Liqun Ma, Yaohong Jin

Multi-Task Learning (MTL) aims at boosting the overall performance of each individual task by leveraging useful information contained in multiple related tasks.

Multi-Task Learning

TwoWingOS: A Two-Wing Optimization Strategy for Evidential Claim Verification

1 code implementation EMNLP 2018 Wenpeng Yin, Dan Roth

We develop TwoWingOS (two-wing optimization strategy), a system that, while identifying appropriate evidence for a claim, also determines whether or not the claim is supported by the evidence.

Claim Verification Natural Language Inference +1

Term Definitions Help Hypernymy Detection

no code implementations SEMEVAL 2018 Wenpeng Yin, Dan Roth

Existing methods of hypernymy detection mainly rely on statistics over a big corpus, either mining some co-occurring patterns like "animals such as cats" or embedding words of interest into context-aware vectors.
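
The snippet above mentions mining co-occurrence patterns like "animals such as cats" from a corpus. A minimal sketch of extracting (hypernym, hyponym) pairs with one classic Hearst-style pattern:

```python
# Extract (hypernym, hyponym) pairs matched by the pattern "X such as Y".
import re

PATTERN = re.compile(r"(\w+) such as (\w+)")

def hearst_pairs(corpus):
    return [(m.group(1), m.group(2)) for m in PATTERN.finditer(corpus)]

corpus = "He studied animals such as cats and instruments such as violins."
print(hearst_pairs(corpus))  # [('animals', 'cats'), ('instruments', 'violins')]
```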

Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms

1 code implementation TACL 2018 Wenpeng Yin, Hinrich Schütze

We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution).

Claim Verification Natural Language Inference +3

Comparative Study of CNN and RNN for Natural Language Processing

4 code implementations 7 Feb 2017 Wenpeng Yin, Katharina Kann, Mo Yu, Hinrich Schütze

Deep neural networks (DNN) have revolutionized the field of natural language processing (NLP).

Position

Task-Specific Attentive Pooling of Phrase Alignments Contributes to Sentence Matching

no code implementations EACL 2017 Wenpeng Yin, Hinrich Schütze

This work studies comparatively two typical sentence matching tasks: textual entailment (TE) and answer selection (AS), observing that weaker phrase alignments are more critical in TE, while stronger phrase alignments deserve more attention in AS.

Answer Selection Natural Language Inference +2

Simple Question Answering by Attentive Convolutional Neural Network

no code implementations COLING 2016 Wenpeng Yin, Mo Yu, Bing Xiang, Bo-Wen Zhou, Hinrich Schütze

In fact selection, we match the subject entity in a fact candidate with the entity mention in the question by a character-level convolutional neural network (char-CNN), and match the predicate in that fact with the question by a word-level CNN (word-CNN).

Entity Linking Fact Selection +1

Why and How to Pay Different Attention to Phrase Alignments of Different Intensities

no code implementations 23 Apr 2016 Wenpeng Yin, Hinrich Schütze

We address the problems of identifying phrase alignments of flexible granularity and pooling alignments of different intensities for these tasks.

Answer Selection Natural Language Inference +3

Discriminative Phrase Embedding for Paraphrase Identification

no code implementations HLT 2015 Wenpeng Yin, Hinrich Schütze

This work, concerning the paraphrase identification task, on the one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases.

Paraphrase Identification

Online Updating of Word Representations for Part-of-Speech Tagging

no code implementations EMNLP 2015 Wenpeng Yin, Tobias Schnabel, Hinrich Schütze

We propose online unsupervised domain adaptation (DA), which is performed incrementally as data comes in and is applicable when batch DA is not possible.

Online unsupervised domain adaptation Part-Of-Speech Tagging +2

ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs

8 code implementations TACL 2016 Wenpeng Yin, Hinrich Schütze, Bing Xiang, Bo-Wen Zhou

(ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes its counterpart into consideration.

Answer Selection Natural Language Inference +2
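
ABCNN integrates mutual influence between two sentences by computing an attention matrix over their feature maps; one of its schemes scores each position pair as 1 / (1 + euclidean distance). A minimal sketch on toy token vectors:

```python
# Attention matrix between two sentences' feature vectors, ABCNN-style.
import math

def attention_matrix(s0, s1):
    """A[i][j] = 1 / (1 + ||s0[i] - s1[j]||) for two lists of feature vectors."""
    def dist(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return [[1.0 / (1.0 + dist(x, y)) for y in s1] for x in s0]

s0 = [[1.0, 0.0], [0.0, 1.0]]   # sentence 0: two token vectors
s1 = [[1.0, 0.0], [0.5, 0.5]]   # sentence 1: two token vectors
A = attention_matrix(s0, s1)
print(A[0][0])  # identical vectors -> 1.0
```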

Learning Meta-Embeddings by Using Ensembles of Embedding Sets

1 code implementation 18 Aug 2015 Wenpeng Yin, Hinrich Schütze

Word embeddings -- distributed representations of words -- in deep learning are beneficial for many tasks in natural language processing (NLP).

Part-Of-Speech Tagging Word Embeddings +1
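
One simple way to ensemble embedding sets, as in the meta-embedding entry above, is concatenating a word's vector from each set (the paper also learns weighted combinations; only concatenation is sketched here, with zero-filling for out-of-vocabulary words as an illustrative choice):

```python
# Concatenation meta-embedding sketch over toy embedding sets.

def concat_meta_embedding(word, embedding_sets, dims):
    """Concatenate the word's vector from each set, zero-filling if absent."""
    vec = []
    for emb, dim in zip(embedding_sets, dims):
        vec.extend(emb.get(word, [0.0] * dim))
    return vec

glove_like = {"cat": [0.1, 0.2]}            # hypothetical 2-d set
w2v_like = {"cat": [0.3], "dog": [0.4]}     # hypothetical 1-d set
print(concat_meta_embedding("cat", [glove_like, w2v_like], [2, 1]))  # [0.1, 0.2, 0.3]
print(concat_meta_embedding("dog", [glove_like, w2v_like], [2, 1]))  # [0.0, 0.0, 0.4]
```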

Deep Learning Embeddings for Discontinuous Linguistic Units

no code implementations 18 Dec 2013 Wenpeng Yin, Hinrich Schütze

Deep learning embeddings have been successfully used for many natural language processing problems.

coreference-resolution
