Search Results for author: Peiyi Wang

Found 36 papers, 24 papers with code

Utilizing Local Hierarchy with Adversarial Training for Hierarchical Text Classification

1 code implementation • 29 Feb 2024 • Zihan Wang, Peiyi Wang, Houfeng Wang

Hierarchical text classification (HTC) is a challenging subtask of multi-label classification due to its complex taxonomic structure.

Multi-Label Classification • Text Classification +1

Reducing Hallucinations in Entity Abstract Summarization with Facts-Template Decomposition

1 code implementation • 29 Feb 2024 • Fangwei Zhu, Peiyi Wang, Zhifang Sui

Entity abstract summarization aims to generate a coherent description of a given entity based on a set of relevant Internet documents.

PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization

no code implementations • 25 Feb 2024 • Xiangdi Meng, Damai Dai, Weiyao Luo, Zhe Yang, Shaoxiang Wu, Xiaochen Wang, Peiyi Wang, Qingxiu Dong, Liang Chen, Zhifang Sui

Although LoRA fine-tuning is effective, there is still a performance gap compared to full fine-tuning, since its weight update is limited to low-rank matrices.
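
For context, a minimal numpy sketch of the standard LoRA reparameterization that this low-rank bottleneck refers to; dimensions, names, and hyperparameters are illustrative, not taken from the paper:

```python
import numpy as np

# The frozen weight W is updated only through the low-rank product B @ A,
# so the effective update delta_W has rank at most r.
d_out, d_in, r = 512, 512, 8            # r << min(d_out, d_in)

W = np.random.randn(d_out, d_in)        # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.01     # trainable rank-r factor
B = np.zeros((d_out, r))                # trainable, zero-initialized

def lora_forward(x, alpha=16.0):
    """Forward pass with the low-rank update merged on the fly."""
    delta_W = (alpha / r) * B @ A       # rank(delta_W) <= r
    return (W + delta_W) @ x

x = np.random.randn(d_in)
y = lora_forward(x)                     # equals W @ x at init, since B = 0
```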

PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain

1 code implementation • 21 Feb 2024 • Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao, Zefan Cai, Yuchi Wang, Peiyi Wang, Xiangdi Meng, Tianyu Liu, Baobao Chang

To address this, we introduce Embodied-Instruction-Evolution (EIE), an automatic framework for synthesizing instruction tuning examples in multimodal embodied environments.

Autonomous Driving • Decision Making

ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization

1 code implementation • 14 Feb 2024 • Feifan Song, Yuxuan Fan, Xin Zhang, Peiyi Wang, Houfeng Wang

Large Language Models (LLMs) rely on Human Preference Alignment (HPA) to ensure the generation of safe content.

In-Context Learning

DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

1 code implementation • 5 Feb 2024 • Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, Daya Guo

Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature.

Ranked #11 on Math Word Problem Solving on MATH (using extra training data)

Arithmetic Reasoning • Math +1

Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding

1 code implementation • 15 Jan 2024 • Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, Zhifang Sui

To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference.

Language Modelling • Large Language Model
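
For intuition, a schematic greedy draft-then-verify loop capturing the paradigm this survey covers; a simplified sketch, not any specific system's implementation:

```python
def speculative_decode(target_step, draft_step, prompt, k=4, max_len=32):
    """Schematic greedy draft-then-verify loop.

    target_step(seq) -> next token under the large target model;
    draft_step(seq)  -> next token under the small draft model.
    In real systems the verification is one parallel forward pass of the
    target model; it is sequential here for clarity.
    """
    tokens = list(prompt)
    while len(tokens) < max_len:
        # 1) Draft: cheaply propose k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_step(tokens + draft))
        # 2) Verify: keep the longest prefix the target model agrees with.
        accepted = []
        for t in draft:
            if target_step(tokens + accepted) == t:
                accepted.append(t)
            else:
                break
        # 3) Emit one token from the target model so progress is guaranteed
        #    (the corrected token on mismatch, a bonus token otherwise).
        accepted.append(target_step(tokens + accepted))
        tokens += accepted
    return tokens

# Toy check: a perfect draft model lets every proposed token be accepted.
print(speculative_decode(lambda s: len(s), lambda s: len(s), [0], k=4, max_len=10))
```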

Silkie: Preference Distillation for Large Visual Language Models

no code implementations • 17 Dec 2023 • Lei LI, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, Lingpeng Kong

This paper explores preference distillation for large vision-language models (LVLMs), improving their ability to generate helpful and faithful responses anchored in the visual context.

Hallucination • Visual Question Answering

Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations

1 code implementation • 14 Dec 2023 • Peiyi Wang, Lei LI, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, Zhifang Sui

In this paper, we present an innovative process-oriented math reward model called Math-Shepherd, which assigns a reward score to each step of a math problem solution.

Ranked #13 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning • GSM8K +2
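
For intuition, a hedged sketch of using a process reward model (PRM) such as Math-Shepherd for best-of-N verification; the aggregation over step scores shown here (minimum) is one common choice and may differ from the paper's, and all names are illustrative:

```python
def rerank_with_prm(problem, candidates, prm_score):
    """Best-of-N verification with a process reward model (PRM).

    prm_score(problem, steps_prefix) -> estimated correctness of the
    latest step; a solution is scored by the minimum over its steps.
    """
    def solution_score(solution):
        steps = solution.split("\n")
        return min(prm_score(problem, steps[: i + 1]) for i in range(len(steps)))
    return max(candidates, key=solution_score)

# Toy usage with a stand-in PRM that rewards steps mentioning "2".
best = rerank_with_prm(
    "1 + 1 = ?",
    ["Step 1: 1 + 1 = 2\nAnswer: 2", "Answer: 3"],
    prm_score=lambda p, steps: 1.0 if "2" in steps[-1] else 0.1,
)
print(best)  # the step-by-step solution wins
```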

Guiding AMR Parsing with Reverse Graph Linearization

1 code implementation • 13 Oct 2023 • Bofei Gao, Liang Chen, Peiyi Wang, Zhifang Sui, Baobao Chang

Abstract Meaning Representation (AMR) parsing aims to extract an abstract semantic graph from a given sentence.

AMR Parsing • Sentence

Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning

1 code implementation • 12 Oct 2023 • Zhe Yang, Damai Dai, Peiyi Wang, Zhifang Sui

To assess the quality of weights in the absence of additional validation data, we design a masked self-prediction (MSP) score that exhibits a strong correlation with the final ICL performance.

In-Context Learning • Text Classification +1
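
The listing does not spell out the MSP score, so the following is only a plausible leave-one-out shape for such a validation-free score, with all names hypothetical:

```python
def msp_score(loglik, demos, weights):
    """Illustrative leave-one-out score in the spirit of 'masked
    self-prediction': mask each demonstration's label and measure how
    well the remaining weighted demonstrations predict it, requiring no
    extra validation data. The paper's exact MSP definition may differ.

    loglik(context, context_weights, x, y) -> log p(y | x, weighted context)
    """
    total = 0.0
    for i, (x, y) in enumerate(demos):
        context = demos[:i] + demos[i + 1:]   # all demos except the masked one
        w = weights[:i] + weights[i + 1:]
        total += loglik(context, w, x, y)     # predict the masked label
    return total / len(demos)
```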

Rationale-Enhanced Language Models are Better Continual Relation Learners

1 code implementation • 10 Oct 2023 • Weimin Xiong, YiFan Song, Peiyi Wang, Sujian Li

Continual relation extraction (CRE) aims to solve the problem of catastrophic forgetting when learning a sequence of newly emerging relations.

Continual Relation Extraction • Relation +1

Making Large Language Models Better Reasoners with Alignment

no code implementations • 5 Sep 2023 • Peiyi Wang, Lei LI, Liang Chen, Feifan Song, Binghuai Lin, Yunbo Cao, Tianyu Liu, Zhifang Sui

To address this problem, we introduce an Alignment Fine-Tuning (AFT) paradigm, which involves three steps: 1) fine-tuning LLMs with chain-of-thought (CoT) training data; 2) generating multiple CoT responses for each question and categorizing them as positive or negative based on whether they reach the correct answer; and 3) calibrating the scores that LLMs assign to positive and negative responses with a novel constraint alignment loss.
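
For intuition, a minimal sketch of a pairwise ranking loss in the spirit of step 3; the paper's actual constraint alignment loss adds a calibration constraint that this sketch does not reproduce, and all names are illustrative:

```python
import math

def alignment_loss(pos_scores, neg_scores, beta=1.0):
    """Schematic pairwise ranking loss: push every positive CoT
    response's score above every negative one's."""
    pairs = [(sp, sn) for sp in pos_scores for sn in neg_scores]
    return sum(math.log1p(math.exp(beta * (sn - sp))) for sp, sn in pairs) / len(pairs)

# Small loss when positives already outrank negatives.
print(alignment_loss([2.0, 1.5], [0.5, -1.0]))
```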

M³IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning

no code implementations • 7 Jun 2023 • Lei LI, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, Qi Liu

To tackle this challenge and promote research in the vision-language field, we introduce the Multi-Modal, Multilingual Instruction Tuning (M³IT) dataset, designed to optimize VLM alignment with human instructions.

World Knowledge

Large Language Models are not Fair Evaluators

2 code implementations • 29 May 2023 • Peiyi Wang, Lei LI, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, Zhifang Sui

In this paper, we uncover a systematic bias in the evaluation paradigm of adopting large language models (LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models.

Language Modelling • Large Language Model +1
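
A small sketch in the spirit of the paper's position-balancing remedy: querying the judge with both response orderings and averaging cancels a bias toward whichever response is listed first (the judge interface here is hypothetical):

```python
def calibrated_compare(judge, question, resp_a, resp_b):
    """Order-swap calibration for an LLM judge.

    judge(question, first, second) -> (score_first, score_second).
    Averaging scores across both orderings cancels positional bias.
    """
    a_first, b_second = judge(question, resp_a, resp_b)
    b_first, a_second = judge(question, resp_b, resp_a)  # swapped order
    return (a_first + a_second) / 2, (b_first + b_second) / 2
```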

RepCL: Exploring Effective Representation for Continual Text Classification

no code implementations • 12 May 2023 • YiFan Song, Peiyi Wang, Dawei Zhu, Tianyu Liu, Zhifang Sui, Sujian Li

Continual learning (CL) aims to constantly learn new knowledge over time while avoiding catastrophic forgetting on old tasks.

Continual Learning • Representation Learning +2

Enhancing Continual Relation Extraction via Classifier Decomposition

1 code implementation • 8 May 2023 • Heming Xia, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui

In this work, we point out that two typical biases exist after training with this vanilla strategy: classifier bias and representation bias, which cause the knowledge the model previously learned to be overshadowed.

Continual Relation Extraction • Relation

Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation

1 code implementation • 10 Oct 2022 • Peiyi Wang, YiFan Song, Tianyu Liu, Binghuai Lin, Yunbo Cao, Sujian Li, Zhifang Sui

In this paper, through empirical studies, we argue that this assumption may not hold; an important reason for catastrophic forgetting is that the learned representations lack robustness against analogous relations that appear in the subsequent learning process.

Continual Relation Extraction • Relation

A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction

1 code implementation • NAACL 2022 • Runxin Xu, Peiyi Wang, Tianyu Liu, Shuang Zeng, Baobao Chang, Zhifang Sui

In this paper, we focus on extracting event arguments from an entire document, which mainly faces two critical problems: a) long-distance dependencies between triggers and arguments across sentences; and b) distracting context around an event in the document.

Document-level Event Extraction • Event Argument Extraction +2

HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification

1 code implementation • 28 Apr 2022 • Zihan Wang, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui, Houfeng Wang

However, in this paradigm there exists a huge gap between classification tasks with a sophisticated label hierarchy and the masked language model (MLM) pretraining tasks of PLMs, and thus the potential of PLMs cannot be fully tapped.

Language Modelling • Multi-Label Classification +2
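
For intuition, a generic sketch of how prompt tuning narrows this gap by recasting classification as masked-token prediction; HPT's learned soft prompts and hierarchy-aware components are not reproduced here, and the template wording is illustrative:

```python
def build_hierarchy_prompt(text, num_levels):
    """Append one [MASK] slot per hierarchy level, so the PLM's
    pretrained MLM head can predict labels instead of a newly
    initialized classifier."""
    slots = " ".join("[MASK]" for _ in range(num_levels))
    return f"{text} The topic path is: {slots}."

print(build_hierarchy_prompt("The striker scored twice in the final.", 2))
```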

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

2 code implementations • Findings (NAACL) 2022 • Liang Chen, Peiyi Wang, Runxin Xu, Tianyu Liu, Zhifang Sui, Baobao Chang

As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize that semantically or formally related auxiliary tasks can better enhance AMR parsing.

Ranked #7 on AMR Parsing on LDC2020T02 (using extra training data)

AMR Parsing • Dependency Parsing +1

SmartSales: Sales Script Extraction and Analysis from Sales Chatlog

no code implementations • 19 Apr 2022 • Hua Liang, Tianyu Liu, Peiyi Wang, Mengliang Rao, Yunbo Cao

Customer objection response assists salespeople in identifying typical customer objections and the corresponding winning sales scripts, as well as searching for suitable sales responses to a given customer objection.

Management

Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation

2 code implementations • 30 Mar 2022 • Heming Xia, Tao Ge, Peiyi Wang, Si-Qing Chen, Furu Wei, Zhifang Sui

We propose Speculative Decoding (SpecDec) to formally study, for the first time, exploiting the idea of speculative execution to accelerate autoregressive (AR) decoding.

Abstractive Text Summarization • Machine Translation +1

Hierarchical Curriculum Learning for AMR Parsing

1 code implementation • ACL 2022 • Peiyi Wang, Liang Chen, Tianyu Liu, Damai Dai, Yunbo Cao, Baobao Chang, Zhifang Sui

Abstract Meaning Representation (AMR) parsing aims to translate sentences into a hierarchically structured semantic representation, and has recently been empowered by pretrained sequence-to-sequence models.

AMR Parsing • Representation Learning

An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling

1 code implementation • NAACL 2022 • Peiyi Wang, Runxin Xu, Tianyu Liu, Qingyu Zhou, Yunbo Cao, Baobao Chang, Zhifang Sui

Few-Shot Sequence Labeling (FSSL) is a canonical paradigm for tagging models, e.g., named entity recognition and slot filling, to generalize to an emerging, resource-scarce domain.

Few-shot NER • Meta-Learning +4

Behind the Scenes: An Exploration of Trigger Biases Problem in Few-Shot Event Classification

1 code implementation • 29 Aug 2021 • Peiyi Wang, Runxin Xu, Tianyu Liu, Damai Dai, Baobao Chang, Zhifang Sui

However, we find they suffer from trigger biases that signify the statistical homogeneity between some trigger words and target event types, which we summarize as trigger overlapping and trigger separability.

Explicit Interaction Network for Aspect Sentiment Triplet Extraction

no code implementations • 21 Jun 2021 • Peiyi Wang, Tianyu Liu, Damai Dai, Runxin Xu, Baobao Chang, Zhifang Sui

The table encoder extracts sentiment at the token-pair level, so that compositional features between targets and opinions can be easily captured.

Aspect Sentiment Triplet Extraction • Sentence +1
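
A toy sketch of the token-pair table view described above; pair_score is a hypothetical stand-in for the learned table encoder:

```python
def token_pair_table(tokens, pair_score):
    """Build a table whose cell (i, j) scores the pair
    (tokens[i], tokens[j]), making target-opinion compositions explicit."""
    n = len(tokens)
    return [[pair_score(tokens[i], tokens[j]) for j in range(n)] for i in range(n)]

# Toy usage: score 1.0 when a target word meets an opinion word.
table = token_pair_table(
    ["battery", "life", "is", "great"],
    pair_score=lambda ti, tj: 1.0 if (ti == "battery" and tj == "great") else 0.0,
)
```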

First Target and Opinion then Polarity: Enhancing Target-opinion Correlation for Aspect Sentiment Triplet Extraction

no code implementations • 17 Feb 2021 • Lianzhe Huang, Peiyi Wang, Sujian Li, Tianyu Liu, Xiaodong Zhang, Zhicong Cheng, Dawei Yin, Houfeng Wang

Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets from a sentence, including target entities, associated sentiment polarities, and opinion spans which rationalize the polarities.

Aspect Sentiment Triplet Extraction • Sentence

Neural Review Rating Prediction with Hierarchical Attentions and Latent Factors

no code implementations • 29 May 2019 • Xianchen Wang, Hongtao Liu, Peiyi Wang, Fangzhao Wu, Hongyan Xu, Wenjun Wang, Xing Xie

In this paper, we propose a hierarchical attention model fused with a latent factor model for rating prediction with reviews, which can focus on important words and informative reviews.

Informativeness
