Search Results for author: PengFei Liu

Found 81 papers, 41 papers with code

Multi-task Learning with Gradient Communication

no code implementations ICLR 2019 Pengfei Liu, Xuanjing Huang

In this paper, we describe a general framework for systematically analyzing current neural models for multi-task learning. We find that existing models are expected to disentangle features into different spaces, while the features learned in practice remain entangled in the shared space, leaving potential hazards for subsequent training or unseen tasks.

Inductive Bias Multi-Task Learning

Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization

1 code implementation Findings (EMNLP) 2021 Yiran Chen, PengFei Liu, Xipeng Qiu

In this paper, we present an adversarial meta-evaluation methodology that allows us to (i) diagnose the fine-grained strengths and weaknesses of 6 existing top-performing metrics over 24 diagnostic test datasets, and (ii) search for directions for further improvement via data augmentation.

Data Augmentation

GPTScore: Evaluate as You Desire

1 code implementation 8 Feb 2023 Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, PengFei Liu

Generative Artificial Intelligence (AI) has enabled the development of sophisticated models that are capable of producing high-caliber text, images, and other outputs through the utilization of large pre-trained models.

Text Generation
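
The core scoring mechanic is simple to sketch: a hypothesis is scored by the average log-likelihood a generative LM assigns to its tokens, conditioned on an instruction-style prompt. Below is a minimal sketch using GPT-2 as a stand-in scorer; the model choice and prompt wording are illustrative, not the paper's setup.

```python
# Minimal sketch of LM-likelihood scoring; GPT-2 stands in for the
# large pre-trained scorers used in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def gpt_score(prompt: str, hypothesis: str) -> float:
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    hyp_ids = tok(hypothesis, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, hyp_ids], dim=1)
    with torch.no_grad():
        logprobs = lm(input_ids).logits.log_softmax(-1)
    # log-prob of each hypothesis token given everything before it
    preds = logprobs[0, prompt_ids.size(1) - 1 : -1]
    return preds.gather(1, hyp_ids[0].unsqueeze(1)).mean().item()

prompt = "Rewrite the following fluently: the cat sat. Rewritten:"
print(gpt_score(prompt, " The cat sat on the mat."))
```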

A Cognitive Frequency Allocation Strategy for Multi-Carrier Radar Against Communication Interference

no code implementations 23 Dec 2022 Zhao Shan, Lei Wang, PengFei Liu, Tianyao Huang, Yimin Liu

To address this challenge, we use a novel iterative selection technique that breaks a difficult decision task into several easy ones.

Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation

1 code implementation 15 Dec 2022 Yixin Liu, Alexander R. Fabbri, PengFei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev

4) We evaluate existing automatic metrics using the collected human annotations across evaluation protocols and demonstrate how our benchmark leads to more statistically stable and significant results.

T5Score: Discriminative Fine-tuning of Generative Evaluation Metrics

1 code implementation 12 Dec 2022 Yiwei Qin, Weizhe Yuan, Graham Neubig, PengFei Liu

Both have their advantages; discriminative metrics are able to directly optimize for the problem of distinguishing between good and bad outputs, while generative metrics can be trained using abundant raw text.

Searching for Effective Multilingual Fine-Tuning Methods: A Case Study in Summarization

no code implementations 12 Dec 2022 Yiwei Qin, Graham Neubig, PengFei Liu

Recently, a large number of tuning strategies have been proposed to adapt pre-trained language models to downstream tasks.

Text Summarization

PAL: Program-aided Language Models

no code implementations 18 Nov 2022 Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, PengFei Liu, Yiming Yang, Jamie Callan, Graham Neubig

Much of this success can be attributed to prompting methods such as "chain-of-thought", which employ LLMs both to understand the problem description by decomposing it into steps and to solve each step of the problem.

GSM8K Mathematical Reasoning
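
The mechanism is easy to sketch: the LM emits a short program as its "reasoning", and the Python runtime, rather than the model, produces the final answer. In the sketch below, `generate` is a hypothetical stand-in for an LLM completion call.

```python
# `generate` is a hypothetical stand-in for an LLM completion call; it
# returns the kind of program that program-aided prompting elicits.
def generate(question: str) -> str:
    return (
        "initial_balls = 5\n"
        "bought = 2 * 3\n"
        "answer = initial_balls + bought\n"
    )

question = ("Roger has 5 balls. He buys 2 cans with 3 balls each. "
            "How many balls does he have now?")
scope = {}
exec(generate(question), scope)  # the interpreter does the arithmetic
print(scope["answer"])           # -> 11
```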

Towards a Unified Multi-Dimensional Evaluator for Text Generation

1 code implementation 13 Oct 2022 Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, PengFei Liu, Chenguang Zhu, Heng Ji, Jiawei Han

We re-frame NLG evaluation as a Boolean Question Answering (QA) task, and by guiding the model with different questions, we can use one evaluator to evaluate from multiple dimensions.

Question Answering Response Generation +3
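
A minimal sketch of the Boolean-QA scoring mechanics follows, with vanilla t5-small as a stand-in for the trained evaluator: each quality dimension becomes a yes/no question, and the score compares the model's probability of answering "yes" versus "no".

```python
# Sketch of Boolean-QA scoring; vanilla t5-small stands in for the
# trained multi-dimensional evaluator described in the paper.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def yes_no_score(question: str, context: str) -> float:
    inputs = tok(f"question: {question} context: {context}",
                 return_tensors="pt", truncation=True)
    yes_id = tok("yes", add_special_tokens=False).input_ids[0]
    no_id = tok("no", add_special_tokens=False).input_ids[0]
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
    logp = logits.log_softmax(-1)
    return (logp[yes_id] - logp[no_id]).item()  # > 0 favors "yes"

print(yes_no_score("Is this a coherent summary of the document?",
                   "summary: ... document: ..."))
```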

Artificial Neural Networks for Finger Vein Recognition: A Survey

no code implementations 29 Aug 2022 Yimin Yin, Renye Zhang, PengFei Liu, Wanxia Deng, Siliang He, Chen Li, Jinghua Zhang

To the best of our knowledge, this paper is the first comprehensive survey focusing on finger vein recognition based on artificial neural networks.

Feature Engineering Finger Vein Recognition

reStructured Pre-training

1 code implementation 22 Jun 2022 Weizhe Yuan, PengFei Liu

In addition, we test our model on the 2022 College Entrance Examination English, held a few days before the paper's release (2022.06.08), and it gets a total score of 134 (vs.

Polyglot Prompt: Multilingual Multitask PrompTraining

1 code implementation 29 Apr 2022 Jinlan Fu, See-Kiong Ng, PengFei Liu

This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e., without any task- or language-specific module?

Named Entity Recognition +7

BRIO: Bringing Order to Abstractive Summarization

2 code implementations ACL 2022 Yixin Liu, PengFei Liu, Dragomir Radev, Graham Neubig

Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model assigns all probability mass to the reference summary.

Abstractive Text Summarization
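
The alternative BRIO proposes is a contrastive ranking objective: candidate summaries are ordered by their ROUGE against the reference, and the model is trained so a better candidate out-scores a worse one by a margin that grows with the rank gap. A minimal sketch of one common form of that loss (the paper pairs it with the usual MLE term):

```python
import torch

def ranking_loss(scores: torch.Tensor, margin: float = 0.001) -> torch.Tensor:
    """Contrastive ranking loss; `scores` are length-normalized candidate
    log-probabilities, pre-sorted so scores[0] is the best candidate."""
    loss = scores.new_zeros(())
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # candidate i is better than j: enforce a rank-scaled margin
            loss = loss + torch.relu(scores[j] - scores[i] + (j - i) * margin)
    return loss

# three candidates, best-first by ROUGE; the middle one is mis-scored
print(ranking_loss(torch.tensor([-0.9, -1.2, -1.0])))
```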

DataLab: A Platform for Data Analysis and Intervention

no code implementations ACL 2022 Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, PengFei Liu

Despite data's crucial role in machine learning, most existing tools and research tend to focus on systems built on top of existing data rather than on how to interpret and manipulate it.

The MSXF TTS System for ICASSP 2022 ADD Challenge

no code implementations 27 Jan 2022 Chunyong Yang, PengFei Liu, Yanli Chen, Hongbin Wang, Min Liu

The end-to-end TTS system is VITS, and the self-supervised pre-trained model is wav2vec 2.0.

Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition

1 code implementation 17 Jan 2022 PengFei Liu, Kun Li, Helen Meng

Emotion recognition is a challenging and actively-studied research area that plays a critical role in emotion-aware human-computer interaction systems.

Multimodal Emotion Recognition

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing

1 code implementation 28 Jul 2021 PengFei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig

This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning".

Language Modelling Zero-Shot Learning
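
The canonical example of the paradigm is cloze-style prompting: a task is recast as filling a slot in a natural-language template, with label words standing in for classes. A minimal sketch, with the template and label words chosen for illustration:

```python
# Cloze prompting: sentiment classification via a fill-mask template.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

text = "I missed the bus and was late for the meeting."
template = f"{text} It made me feel [MASK]."

# compare label words standing in for the classes {positive, negative}
for cand in fill(template, targets=["happy", "terrible"]):
    print(cand["token_str"], round(cand["score"], 4))
```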

BARTScore: Evaluating Generated Text as Text Generation

1 code implementation NeurIPS 2021 Weizhe Yuan, Graham Neubig, PengFei Liu

In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models.

Informativeness Machine Translation +3
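
Concretely, the better a seq2seq model "generates" a hypothesis from its source, the better the hypothesis is judged to be. A minimal sketch of that log-likelihood computation follows; the released BARTScore differs in model choice, scoring direction, and token weighting.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-large-cnn").eval()

def bart_score(source: str, target: str) -> float:
    batch = tok(source, return_tensors="pt", truncation=True)
    labels = tok(target, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        loss = model(**batch, labels=labels).loss  # mean token NLL
    return -loss.item()  # higher = target more likely given source

doc = "PG&E scheduled blackouts across Northern California on Wednesday."
print(bart_score(doc, "PG&E planned power outages in Northern California."))
```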

How well do you know your summarization datasets?

1 code implementation Findings (ACL) 2021 Priyam Tejaswin, Dhruv Naik, PengFei Liu

(2) The performance of models and the reliability of metrics are dependent on sample complexity.

CitationIE: Leveraging the Citation Graph for Scientific Information Extraction

1 code implementation ACL 2021 Vijay Viswanathan, Graham Neubig, PengFei Liu

Automatically extracting key information from scientific documents has the potential to help scientists work more efficiently and accelerate the pace of scientific progress.

SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization

2 code implementations ACL 2021 Yixin Liu, PengFei Liu

In this paper, we present a conceptually simple yet empirically powerful framework for abstractive summarization, SimCLS, which can bridge the gap between the learning objective and evaluation metrics that arises from the currently dominant sequence-to-sequence learning framework, by formulating text generation as a reference-free evaluation problem (i.e., quality estimation) assisted by contrastive learning.

Abstractive Text Summarization Contrastive Learning +1
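
The two stages are easy to sketch: sample several candidates from a seq2seq summarizer, then let a separate scorer pick the best one given only the source document. In the sketch below, `score` is a placeholder for SimCLS's contrastively trained RoBERTa scorer.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
gen = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-large-cnn").eval()

def candidates(doc: str, k: int = 4) -> list:
    batch = tok(doc, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outs = gen.generate(**batch, num_beams=k,
                            num_return_sequences=k, max_length=60)
    return tok.batch_decode(outs, skip_special_tokens=True)

def score(doc: str, cand: str) -> float:
    # placeholder: SimCLS learns this with a RoBERTa encoder trained
    # contrastively so that higher-quality candidates score higher
    return -abs(len(cand.split()) - 30)

doc = ("The city council approved a budget on Monday that raises transit "
       "funding by ten percent while trimming road maintenance.")
print(max(candidates(doc), key=lambda c: score(doc, c)))
```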

SpanNER: Named Entity Re-/Recognition as Span Prediction

1 code implementation ACL 2021 Jinlan Fu, Xuanjing Huang, PengFei Liu

Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction.

Named Entity Recognition +1
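
Under span prediction, the model enumerates all contiguous spans up to a maximum width and classifies each as an entity type or as no entity, instead of tagging token by token. A toy sketch of the enumeration and scoring interface, where a gazetteer lookup stands in for a learned span classifier:

```python
# Toy sketch of span enumeration and classification; the gazetteer lookup
# stands in for a learned classifier over span representations.
GAZETTEER = {("Alice",): "PER", ("New", "York"): "LOC"}

def enumerate_spans(tokens, max_len=4):
    # every contiguous span of up to max_len tokens is a candidate entity
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + max_len, len(tokens)) + 1):
            yield i, j, tuple(tokens[i:j])

tokens = "Alice moved to New York last May".split()
for i, j, span in enumerate_spans(tokens):
    label = GAZETTEER.get(span, "O")  # "O" = not an entity
    if label != "O":
        print(f"({i}, {j}) {' '.join(span)} -> {label}")
```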

Out-of-Scope Domain and Intent Classification through Hierarchical Joint Modeling

1 code implementation 30 Apr 2021 PengFei Liu, Kun Li, Helen Meng

User queries for a real-world dialog system may sometimes fall outside the scope of the system's capabilities, but appropriate system responses will enable smooth processing throughout the human-computer interaction.

Classification General Classification +3

Open Intent Discovery through Unsupervised Semantic Clustering and Dependency Parsing

1 code implementation 25 Apr 2021 PengFei Liu, Youzhang Ning, King Keung Wu, Kun Li, Helen Meng

This paper presents an unsupervised two-stage approach to discover intents and generate meaningful intent labels automatically from a collection of unlabeled utterances in a domain.

Dependency Parsing Intent Discovery +2
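
The label-generation stage can be sketched with an off-the-shelf dependency parser: an intent label is surfaced as an ACTION-OBJECT pair extracted from the utterance. The sketch below assumes spaCy with the en_core_web_sm model installed; the extraction rules in the paper are richer.

```python
# Assumes spaCy and its small English model:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def intent_phrase(utterance: str):
    """Surface an ACTION-OBJECT intent label from a single utterance."""
    for tok in nlp(utterance):
        if tok.dep_ == "dobj" and tok.head.pos_ == "VERB":
            return f"{tok.head.lemma_}-{tok.lemma_}"
    return None

print(intent_phrase("I need to book a flight to Boston"))  # book-flight
```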

RefSum: Refactoring Neural Summarization

1 code implementation NAACL 2021 Yixin Liu, Zi-Yi Dou, PengFei Liu

Although some recent works show potential complementarity among different state-of-the-art systems, few have investigated this problem in text summarization.

Text Summarization

ExplainaBoard: An Explainable Leaderboard for NLP

1 code implementation ACL 2021 PengFei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaicheng Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Zi-Yi Dou, Graham Neubig

In this paper, we present a new conceptualization and implementation of NLP evaluation: the ExplainaBoard, which, in addition to inheriting the functionality of the standard leaderboard, also allows researchers to (i) diagnose strengths and weaknesses of a single system (e.g., what is the best-performing system bad at?)

Machine Translation

Does syntax matter? A strong baseline for Aspect-based Sentiment Analysis with RoBERTa

1 code implementation NAACL 2021 Junqi Dai, Hang Yan, Tianxiang Sun, PengFei Liu, Xipeng Qiu

In this paper, we first compare the induced trees from PTMs and the dependency parsing trees on several popular models for the ABSA task, showing that the induced tree from fine-tuned RoBERTa (FT-RoBERTa) outperforms the parser-provided tree.

Aspect-Based Sentiment Analysis (ABSA) Dependency Parsing

Larger-Context Tagging: When and Why Does It Work?

no code implementations NAACL 2021 Jinlan Fu, Liangjing Feng, Qi Zhang, Xuanjing Huang, PengFei Liu

The development of neural networks and pretraining techniques has spawned many sentence-level tagging systems that achieved superior performance on typical benchmarks.

Towards More Fine-grained and Reliable NLP Performance Prediction

1 code implementation EACL 2021 Zihuiwen Ye, PengFei Liu, Jinlan Fu, Graham Neubig

We perform an analysis of four types of NLP tasks, demonstrating both the feasibility of fine-grained performance prediction and the necessity of reliability analysis for performance prediction methods in the future.

Can We Automate Scientific Reviewing?

1 code implementation 30 Jan 2021 Weizhe Yuan, PengFei Liu, Graham Neubig

The rapid development of science and technology has been accompanied by an exponential growth in peer-reviewed scientific publications.

Review Generation

Polymorphous density-functional description of paramagnetic phases of quantum magnets

no code implementations 7 Jan 2021 Yufei Zhao, Qiushi Yao, PengFei Liu, Jingzhi Han, Zhi Wang, Qihang Liu

The study of magnetic quantum materials centers on magnetic phase transitions, the most common of which is the transition from a low-temperature magnetically ordered phase to a high-temperature paramagnetic phase.

Materials Science

Interpretable Multi-dataset Evaluation for Named Entity Recognition

2 code implementations EMNLP 2020 Jinlan Fu, PengFei Liu, Graham Neubig

With the proliferation of models for natural language processing tasks, it is even harder to understand the differences between models and their relative merits.

Named Entity Recognition +1

RethinkCWS: Is Chinese Word Segmentation a Solved Task?

1 code implementation EMNLP 2020 Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang

The performance of the Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks, especially the successful use of large pre-trained models.

Chinese Word Segmentation

GSum: A General Framework for Guided Neural Abstractive Summarization

1 code implementation NAACL 2021 Zi-Yi Dou, PengFei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig

Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control.

Abstractive Text Summarization

Re-evaluating Evaluation in Text Summarization

1 code implementation EMNLP 2020 Manik Bhandari, Pranav Gour, Atabak Ashfaq, PengFei Liu, Graham Neubig

Automated evaluation metrics as a stand-in for manual evaluation are an essential part of the development of text-generation tasks such as text summarization.

Text Generation Text Summarization

CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems

2 code implementations Findings (EMNLP) 2020 Yiran Chen, PengFei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang

In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora.

Text Summarization

Heterogeneous Graph Neural Networks for Extractive Document Summarization

1 code implementation ACL 2020 Danqing Wang, PengFei Liu, Yining Zheng, Xipeng Qiu, Xuanjing Huang

An intuitive way is to put them into a graph-based neural network, which has a more complex structure capable of capturing inter-sentence relationships.

Document Summarization Extractive Document Summarization +2

Robust Covariance Estimation for High-dimensional Compositional Data with Application to Microbial Communities Analysis

1 code implementation 20 Apr 2020 Yong He, PengFei Liu, Xinsheng Zhang, Wang Zhou

We construct a Median-of-Means (MOM) estimator for the centered log-ratio covariance matrix and propose a thresholding procedure that is adaptive to the variability of individual entries.

Methodology
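
The estimator itself is compact: split the samples into blocks, average the outer products within each block, and take the entrywise median across blocks, which is robust to heavy tails; small entries are then hard-thresholded. A minimal numpy sketch with an illustrative block count and a fixed threshold (the paper's threshold adapts to entrywise variability):

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform; rows of x are compositions."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

def mom_covariance(z, n_blocks=5, threshold=0.05):
    z = z - z.mean(axis=0)                    # center the CLR data
    prods = z[:, :, None] * z[:, None, :]     # per-sample outer products
    blocks = np.array_split(prods, n_blocks)  # split samples into blocks
    means = np.stack([b.mean(axis=0) for b in blocks])
    cov = np.median(means, axis=0)            # entrywise median of means
    cov[np.abs(cov) < threshold] = 0.0        # hard thresholding
    return cov

x = np.random.default_rng(0).dirichlet(np.ones(5), size=100)
print(mom_covariance(clr(x)).round(3))
```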

Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study

1 code implementation 12 Jan 2020 Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang

While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: Does this excellent performance imply a perfect generalization model, or are there still some limitations?

Named Entity Recognition +1

RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving

2 code implementations ECCV 2020 Peixuan Li, Huaici Zhao, PengFei Liu, Feidao Cao

Unlike these approaches, our method predicts the nine perspective keypoints of a 3D bounding box in image space, and then utilizes the geometric relationships between the 3D and 2D perspectives to recover the dimension, location, and orientation in 3D space.

Autonomous Driving Vehicle Pose Estimation

Learning Sparse Sharing Architectures for Multiple Tasks

1 code implementation 12 Nov 2019 Tianxiang Sun, Yunfan Shao, Xiaonan Li, PengFei Liu, Hang Yan, Xipeng Qiu, Xuanjing Huang

Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, hierarchical sharing, and soft sharing.

Multi-Task Learning

A Closer Look at Data Bias in Neural Extractive Summarization Models

no code implementations WS 2019 Ming Zhong, Danqing Wang, PengFei Liu, Xipeng Qiu, Xuanjing Huang

In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models.

Extractive Summarization

A Two-Stage Framework for Mathematical Expression Recognition

no code implementations 25 Sep 2019 Jin Zhang, Weipeng Ming, PengFei Liu

In the first stage, the method locates and recognizes the math symbols in the input image with an object detection algorithm.

Object Detection

Towards Interpretable Evaluations: A Case Study of Named Entity Recognition

no code implementations 25 Sep 2019 Jinlan Fu, PengFei Liu, Xuanjing Huang

With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits.

Named Entity Recognition +1

Exploring Domain Shift in Extractive Text Summarization

no code implementations 30 Aug 2019 Danqing Wang, PengFei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, Xuanjing Huang

Although domain shift has been well explored in many NLP applications, it has received little attention in extractive text summarization.

Extractive Text Summarization Meta-Learning

Zero-shot Text-to-SQL Learning with Auxiliary Task

1 code implementation 29 Aug 2019 Shuaichen Chang, PengFei Liu, Yun Tang, Jing Huang, Xiaodong He, Bo-Wen Zhou

Recent years have seen great success in the use of neural seq2seq models on the text-to-SQL task.

Text-To-SQL

DropAttention: A Regularization Method for Fully-Connected Self-Attention Networks

no code implementations 25 Jul 2019 Lin Zehui, PengFei Liu, Luyao Huang, Junkun Chen, Xipeng Qiu, Xuanjing Huang

Various dropout methods have been designed for the fully-connected, convolutional, and recurrent layers of neural networks, and have been shown to be effective in avoiding overfitting.
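
The idea extends naturally to attention: instead of dropping hidden units, drop entries of the softmaxed attention matrix and renormalize each row so every query still distributes unit mass over the keys. A minimal sketch of that element-wise variant with normalized rescaling (the paper also describes column-wise dropping):

```python
import torch

def drop_attention(attn: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """attn: (..., n_queries, n_keys), rows are softmax distributions."""
    if p == 0.0:
        return attn
    mask = (torch.rand_like(attn) > p).float()  # drop individual weights
    dropped = attn * mask
    # normalized rescaling: each row stays a valid distribution
    return dropped / dropped.sum(dim=-1, keepdim=True).clamp_min(1e-9)

attn = torch.softmax(torch.randn(2, 4, 4), dim=-1)
print(drop_attention(attn, p=0.2).sum(-1))  # rows still sum to ~1
```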

TIGS: An Inference Algorithm for Text Infilling with Gradient Search

1 code implementation ACL 2019 Dayiheng Liu, Jie Fu, PengFei Liu, Jiancheng Lv

Text infilling is defined as a task for filling in the missing part of a sentence or paragraph, which is suitable for many real-world natural language generation scenarios.

Text Infilling

Cognitive Radar Using Reinforcement Learning in Automotive Applications

no code implementations 24 Apr 2019 PengFei Liu, Yimin Liu, Tianyao Huang, Yuxiang Lu, Xiqin Wang

The concept of cognitive radar (CR) enables radar systems to achieve intelligent adaptation to a changing environment, with a feedback facility from receiver to transmitter.

Reinforcement Learning (RL)

Star-Transformer

2 code implementations NAACL 2019 Qipeng Guo, Xipeng Qiu, PengFei Liu, Yunfan Shao, Xiangyang Xue, Zheng Zhang

Although Transformer has achieved great successes on many NLP tasks, its heavy structure with fully-connected attention connections leads to dependencies on large training data.

Named Entity Recognition (NER) Natural Language Inference +2

Drug Cell Line Interaction Prediction

1 code implementation 28 Dec 2018 Pengfei Liu

Understanding the phenotypic drug response of cancer cell lines plays a vital role in anti-cancer drug discovery and re-purposing.

Drug Discovery

Multi-task Learning over Graph Structures

no code implementations 26 Nov 2018 Pengfei Liu, Jie Fu, Yue Dong, Xipeng Qiu, Jackie Chi Kit Cheung

We present two architectures for multi-task learning with neural sequence models.

General Classification Multi-Task Learning +2

Contextualized Non-local Neural Networks for Sequence Learning

no code implementations 21 Nov 2018 Pengfei Liu, Shuaichen Chang, Xuanjing Huang, Jian Tang, Jackie Chi Kit Cheung

Recently, a large number of neural mechanisms and models have been proposed for sequence learning, of which self-attention, as exemplified by the Transformer model, and graph neural networks (GNNs) have attracted much attention.

General Classification Text Classification +1

Meta-Learning Multi-task Communication

no code implementations 23 Oct 2018 Pengfei Liu, Xuanjing Huang

In this paper, we describe a general framework, Parameters Read-Write Networks (PRaWNs), for systematically analyzing current neural models for multi-task learning. We find that existing models are expected to disentangle features into different spaces, while the features learned in practice remain entangled in the shared space, leaving potential hazards for subsequent training or unseen tasks.

Inductive Bias Meta-Learning +1

Meta Multi-Task Learning for Sequence Modeling

no code implementations 25 Feb 2018 Junkun Chen, Xipeng Qiu, Pengfei Liu, Xuanjing Huang

Specifically, we use a shared meta-network to capture the meta-knowledge of semantic composition and generate the parameters of the task-specific semantic composition models.

Multi-Task Learning Representation Learning +3

Idiom-Aware Compositional Distributed Semantics

no code implementations EMNLP 2017 Pengfei Liu, Kaiyu Qian, Xipeng Qiu, Xuanjing Huang

Idioms are peculiar linguistic constructions that impose great challenges for representing the semantics of language, especially in current prevailing end-to-end neural models, which assume that the semantics of a phrase or sentence can be literally composed from its constitutive words.

General Classification Machine Translation +3

Dynamic Compositional Neural Networks over Tree Structure

no code implementations 11 May 2017 Pengfei Liu, Xipeng Qiu, Xuanjing Huang

Tree-structured neural networks have proven to be effective in learning semantic representations by exploiting syntactic information.

Learning Semantic Representations

Adversarial Multi-task Learning for Text Classification

no code implementations ACL 2017 Pengfei Liu, Xipeng Qiu, Xuanjing Huang

Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features.

General Classification Multi-Task Learning +2
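
A simplified sketch of a shared-private architecture in this spirit: each task gets a private encoder, all tasks share one encoder, and an adversarial task discriminator (implemented here with the standard gradient-reversal trick) pushes the shared features to be task-invariant. Layer choices are illustrative, not the paper's configuration.

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; flips gradient sign going back."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

class SharedPrivateModel(nn.Module):
    def __init__(self, dim: int, n_tasks: int, n_classes: int):
        super().__init__()
        self.shared = nn.GRU(dim, dim, batch_first=True)
        self.private = nn.ModuleList(
            nn.GRU(dim, dim, batch_first=True) for _ in range(n_tasks))
        self.heads = nn.ModuleList(
            nn.Linear(2 * dim, n_classes) for _ in range(n_tasks))
        self.task_disc = nn.Linear(dim, n_tasks)  # the adversary

    def forward(self, x, task: int):
        s, _ = self.shared(x)          # shared features
        p, _ = self.private[task](x)   # task-private features
        logits = self.heads[task](torch.cat([s[:, -1], p[:, -1]], dim=-1))
        # the adversary classifies the task from shared features only;
        # reversed gradients make those features task-invariant
        task_logits = self.task_disc(GradReverse.apply(s[:, -1]))
        return logits, task_logits

model = SharedPrivateModel(dim=16, n_tasks=3, n_classes=2)
class_logits, task_logits = model(torch.randn(4, 10, 16), task=1)
```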

Deep Multi-Task Learning with Shared Memory

no code implementations 23 Sep 2016 Pengfei Liu, Xipeng Qiu, Xuanjing Huang

Neural network based models have achieved impressive results on various specific tasks.

General Classification Multi-Task Learning +2

Syntax-based Attention Model for Natural Language Inference

no code implementations 22 Jul 2016 PengFei Liu, Xipeng Qiu, Xuanjing Huang

Introducing an attentional mechanism into neural networks is a powerful concept that has achieved impressive results in many natural language processing tasks.

Natural Language Inference

Modelling Interaction of Sentence Pair with coupled-LSTMs

no code implementations EMNLP 2016 Pengfei Liu, Xipeng Qiu, Xuanjing Huang

Recently, there is rising interest in modelling the interactions of two sentences with deep neural networks.
