no code implementations • ICLR 2019 • Pengfei Liu, Xuanjing Huang
In this paper, we describe a general framework to systematically analyze current neural models for multi-task learning, in which we find that although existing models are expected to disentangle features into different spaces, the features learned in practice remain entangled in the shared space, leaving potential hazards for further training or unseen tasks.
1 code implementation • Findings (EMNLP) 2021 • Yiran Chen, PengFei Liu, Xipeng Qiu
In this paper, we present an adversarial meta-evaluation methodology that allows us to (i) diagnose the fine-grained strengths and weaknesses of 6 existing top-performing metrics over 24 diagnostic test datasets, and (ii) search for directions for further improvement via data augmentation.
1 code implementation • 7 Mar 2023 • Yixin Liu, Alexander R. Fabbri, Yilun Zhao, PengFei Liu, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev
Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics.
1 code implementation • 8 Feb 2023 • Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, PengFei Liu
Generative Artificial Intelligence (AI) has enabled the development of sophisticated models that are capable of producing high-caliber text, images, and other outputs through the utilization of large pre-trained models.
no code implementations • 23 Dec 2022 • Zhao Shan, Lei Wang, PengFei Liu, Tianyao Huang, Yimin Liu
To address this challenge, we use a novel iterative selection technique that breaks a difficult decision task into several easier ones.
1 code implementation • 15 Dec 2022 • Yixin Liu, Alexander R. Fabbri, PengFei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev
4) We evaluate existing automatic metrics using the collected human annotations across evaluation protocols and demonstrate how our benchmark leads to more statistically stable and significant results.
1 code implementation • 12 Dec 2022 • Yiwei Qin, Weizhe Yuan, Graham Neubig, PengFei Liu
Both have their advantages; discriminative metrics are able to directly optimize for the problem of distinguishing between good and bad outputs, while generative metrics can be trained using abundant raw text.
no code implementations • 12 Dec 2022 • Yiwei Qin, Graham Neubig, PengFei Liu
Recently, a large number of tuning strategies have been proposed to adapt pre-trained language models to downstream tasks.
no code implementations • 18 Nov 2022 • Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, PengFei Liu, Yiming Yang, Jamie Callan, Graham Neubig
Much of this success can be attributed to prompting methods such as "chain-of-thought", which employ LLMs both to understand the problem description by decomposing it into steps and to solve each step of the problem.
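For illustration, here is what a chain-of-thought style prompt looks like in practice; the exemplar wording below is a commonly used illustration and an assumption, not an excerpt from the paper above.

```python
# Illustrative chain-of-thought prompt: the exemplar demonstrates decomposing a
# problem into steps before giving the final answer.
FEW_SHOT_COT = """Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. How many tennis balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

prompt = FEW_SHOT_COT.format(
    question="A baker made 23 muffins and sold 17. How many muffins are left?"
)
# `prompt` is sent to an LLM, which is expected to produce intermediate reasoning
# steps followed by a final line such as "The answer is 6."
```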
1 code implementation • 13 Oct 2022 • Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, PengFei Liu, Chenguang Zhu, Heng Ji, Jiawei Han
We re-frame NLG evaluation as a Boolean Question Answering (QA) task, and by guiding the model with different questions, we can use one evaluator to evaluate from multiple dimensions.
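As a rough illustration of the Boolean QA framing, the sketch below asks a yes/no question about one quality dimension and reads off the probability of "Yes"; the prompt format and the `t5-small` checkpoint are placeholder assumptions, since the evaluator described above is trained specifically for this protocol.

```python
# Simplified sketch of NLG evaluation framed as Boolean QA: ask a yes/no question
# about one dimension (e.g. coherence) and compare the probabilities of "Yes" and
# "No" at the first decoding step. Prompt format and checkpoint are placeholders.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def boolean_qa_score(question: str, context: str) -> float:
    enc = tok(f"question: {question} context: {context}",
              return_tensors="pt", truncation=True)
    yes_id = tok("Yes", add_special_tokens=False).input_ids[0]
    no_id = tok("No", add_special_tokens=False).input_ids[0]
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=start).logits
    probs = torch.softmax(logits[0, -1, [yes_id, no_id]], dim=-1)
    return probs[0].item()  # share of probability mass on "Yes" among {Yes, No}

score = boolean_qa_score("Is this a coherent summary of the document?",
                         "summary: ... document: ...")
```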
no code implementations • 29 Aug 2022 • Yimin Yin, Renye Zhang, PengFei Liu, Wanxia Deng, Siliang He, Chen Li, Jinghua Zhang
To our best knowledge, this paper is the first comprehensive survey focusing on finger vein recognition based on artificial neural networks.
no code implementations • 23 Aug 2022 • Haris Widjaja, Kiril Gashteovski, Wiem Ben Rim, PengFei Liu, Christopher Malon, Daniel Ruffinelli, Carolin Lawrence, Graham Neubig
Knowledge Graphs (KGs) store information in the form of (head, predicate, tail)-triples.
1 code implementation • 22 Jun 2022 • Weizhe Yuan, PengFei Liu
In addition, we test our model on the English test of the 2022 College Entrance Examination that took place a few days ago (2022.06.08), and it achieves a total score of 134 (vs.
1 code implementation • 22 Jun 2022 • Yiwei Ding, Wenjin Deng, Yinglin Zheng, PengFei Liu, Meihong Wang, Xuan Cheng, Jianmin Bao, Dong Chen, Ming Zeng
In this paper, we present the Intra- and Inter-Human Relation Networks (I^2R-Net) for Multi-Person Pose Estimation.
Ranked #1 on Pose Estimation on COCO
no code implementations • NAACL 2022 • Yang Xiao, Jinlan Fu, See-Kiong Ng, PengFei Liu
In this paper, we ask the research question of whether all the datasets in the benchmark are necessary.
1 code implementation • 29 Apr 2022 • Jinlan Fu, See-Kiong Ng, PengFei Liu
This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e., without any task/language-specific module?
2 code implementations • ACL 2022 • Yixin Liu, PengFei Liu, Dragomir Radev, Graham Neubig
Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary.
Ranked #2 on Text Summarization on X-Sum
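A hedged sketch of the contrastive alternative hinted at in the entry above: candidates are ordered by quality (e.g., ROUGE against the reference), and a margin ranking loss encourages the model to score better candidates higher. The margin value and rank-scaled form below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a contrastive ranking loss over candidate summaries: candidates are
# sorted by quality, and better candidates are pushed to receive higher model
# scores by a margin that grows with the rank gap.
import torch

def ranking_loss(scores: torch.Tensor, margin: float = 0.01) -> torch.Tensor:
    """scores[i] is the model's score for the i-th best candidate (descending quality)."""
    loss = scores.new_zeros(())
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # candidate i is better than candidate j, so it should score higher
            # by at least a margin proportional to the rank gap (j - i)
            loss = loss + torch.clamp(margin * (j - i) - (scores[i] - scores[j]), min=0.0)
    return loss

candidate_scores = torch.tensor([-0.21, -0.25, -0.24, -0.30], requires_grad=True)
print(ranking_loss(candidate_scores))  # non-zero: the 3rd candidate outscores the 2nd
```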
no code implementations • ACL 2022 • Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, PengFei Liu
Despite data's crucial role in machine learning, most existing tools and research tend to focus on systems on top of existing data rather than how to interpret and manipulate data.
no code implementations • 27 Jan 2022 • Chunyong Yang, PengFei Liu, Yanli Chen, Hongbin Wang, Min Liu
The end-to-end TTS system is VITS, and the pre-trained self-supervised model is wav2vec 2.0.
1 code implementation • 17 Jan 2022 • PengFei Liu, Kun Li, Helen Meng
Emotion recognition is a challenging and actively-studied research area that plays a critical role in emotion-aware human-computer interaction systems.
1 code implementation • 28 Jul 2021 • PengFei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning".
1 code implementation • NeurIPS 2021 • Weizhe Yuan, Graham Neubig, PengFei Liu
In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models.
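A minimal sketch of this evaluation-as-generation idea: score a hypothesis by the average token log-likelihood a pretrained sequence-to-sequence model assigns to it given the source. The `facebook/bart-base` checkpoint and the mean-loss normalization are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of evaluation-as-generation: the metric is the average token
# log-likelihood a pretrained seq2seq model assigns to the hypothesis given the
# source text.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base").eval()

def generation_score(source: str, hypothesis: str) -> float:
    src = tokenizer(source, return_tensors="pt", truncation=True)
    tgt = tokenizer(hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(input_ids=src.input_ids,
                    attention_mask=src.attention_mask,
                    labels=tgt.input_ids)
    # out.loss is the mean cross-entropy over hypothesis tokens; negate it so that
    # higher scores mean the model finds the hypothesis more likely given the source
    return -out.loss.item()

print(generation_score("The cat sat on the mat.", "A cat is sitting on a mat."))
```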
1 code implementation • Findings (ACL) 2021 • Priyam Tejaswin, Dhruv Naik, PengFei Liu
(2) The performance of models and the reliability of metrics are dependent on sample complexity.
1 code implementation • ACL 2021 • Vijay Viswanathan, Graham Neubig, PengFei Liu
Automatically extracting key information from scientific documents has the potential to help scientists work more efficiently and accelerate the pace of scientific progress.
2 code implementations • ACL 2021 • Yixin Liu, PengFei Liu
In this paper, we present SimCLS, a conceptually simple yet empirically powerful framework for abstractive summarization. It bridges the gap between the learning objective and evaluation metrics that results from the currently dominant sequence-to-sequence learning framework by formulating text generation as a reference-free evaluation problem (i.e., quality estimation) assisted by contrastive learning.
Ranked #4 on Text Summarization on X-Sum
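The two-stage idea behind this entry can be sketched as follows: a sequence-to-sequence model proposes candidate summaries, and a separate scorer re-ranks them against the source document without consulting a reference. Here a simple embedding cosine similarity stands in for the trained contrastive scorer, and both checkpoint names are placeholder assumptions.

```python
# Two-stage sketch: a seq2seq model proposes candidate summaries, and a separate
# encoder re-ranks them against the source document (no reference needed).
import torch
from transformers import AutoModel, AutoModelForSeq2SeqLM, AutoTokenizer

gen_tok = AutoTokenizer.from_pretrained("facebook/bart-base")
generator = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base").eval()
enc_tok = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
encoder = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2").eval()

def embed(text: str) -> torch.Tensor:
    enc = enc_tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)  # mean-pooled sentence embedding

def summarize_and_rerank(document: str, num_candidates: int = 4) -> str:
    enc = gen_tok(document, return_tensors="pt", truncation=True)
    outputs = generator.generate(**enc, num_beams=num_candidates,
                                 num_return_sequences=num_candidates, max_length=60)
    candidates = [gen_tok.decode(o, skip_special_tokens=True) for o in outputs]
    doc_emb = embed(document)
    scores = torch.stack([torch.cosine_similarity(doc_emb, embed(c), dim=0)
                          for c in candidates])
    return candidates[int(scores.argmax())]  # highest-scoring candidate wins
```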
1 code implementation • ACL 2021 • Jinlan Fu, Xuanjing Huang, PengFei Liu
Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction.
1 code implementation • 30 Apr 2021 • PengFei Liu, Kun Li, Helen Meng
User queries for a real-world dialog system may sometimes fall outside the scope of the system's capabilities, but appropriate system responses will enable smooth processing throughout the human-computer interaction.
1 code implementation • 25 Apr 2021 • PengFei Liu, Youzhang Ning, King Keung Wu, Kun Li, Helen Meng
This paper presents an unsupervised two-stage approach to discover intents and generate meaningful intent labels automatically from a collection of unlabeled utterances in a domain.
1 code implementation • NAACL 2021 • Yixin Liu, Zi-Yi Dou, PengFei Liu
Although some recent works show potential complementarity among different state-of-the-art systems, few works try to investigate this problem in text summarization.
1 code implementation • EMNLP 2021 • Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, PengFei Liu, Junjie Hu, Dan Garrette, Graham Neubig, Melvin Johnson
While a sizeable gap to human-level performance remains, improvements have been easier to achieve in some tasks than in others.
1 code implementation • ACL 2021 • PengFei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaicheng Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Zi-Yi Dou, Graham Neubig
In this paper, we present a new conceptualization and implementation of NLP evaluation: the ExplainaBoard, which in addition to inheriting the functionality of the standard leaderboard, also allows researchers to (i) diagnose strengths and weaknesses of a single system (e.g., what is the best-performing system bad at?)
1 code implementation • NAACL 2021 • Junqi Dai, Hang Yan, Tianxiang Sun, PengFei Liu, Xipeng Qiu
In this paper, we first compare the induced trees from PTMs and the dependency parsing trees on several popular models for the ABSA task, showing that the induced tree from fine-tuned RoBERTa (FT-RoBERTa) outperforms the parser-provided tree.
no code implementations • NAACL 2021 • Jinlan Fu, Liangjing Feng, Qi Zhang, Xuanjing Huang, PengFei Liu
The development of neural networks and pretraining techniques has spawned many sentence-level tagging systems that achieved superior performance on typical benchmarks.
1 code implementation • EACL 2021 • Zihuiwen Ye, PengFei Liu, Jinlan Fu, Graham Neubig
We perform an analysis of four types of NLP tasks, demonstrating both the feasibility of fine-grained performance prediction and the necessity of performing reliability analysis for performance prediction methods in the future.
1 code implementation • 30 Jan 2021 • Weizhe Yuan, PengFei Liu, Graham Neubig
The rapid development of science and technology has been accompanied by an exponential growth in peer-reviewed scientific publications.
no code implementations • 7 Jan 2021 • Yufei Zhao, Qiushi Yao, PengFei Liu, Jingzhi Han, Zhi Wang, Qihang Liu
The study of magnetic quantum materials centers on magnetic phase transitions, the most common of which is the transition from a low-temperature magnetically ordered phase to a high-temperature paramagnetic phase.
Materials Science
2 code implementations • EMNLP 2020 • Jinlan Fu, PengFei Liu, Graham Neubig
With the proliferation of models for natural language processing tasks, it is even harder to understand the differences between models and their relative merits.
1 code implementation • EMNLP 2020 • Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang
The performance of the Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks, especially the successful use of large pre-trained models.
no code implementations • COLING 2020 • Manik Bhandari, Pranav Gour, Atabak Ashfaq, PengFei Liu
In text summarization, evaluating the efficacy of automatic metrics without human judgments has recently become popular.
1 code implementation • NAACL 2021 • Zi-Yi Dou, PengFei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig
Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control.
1 code implementation • EMNLP 2020 • Manik Bhandari, Pranav Gour, Atabak Ashfaq, PengFei Liu, Graham Neubig
Automated evaluation metrics as a stand-in for manual evaluation are an essential part of the development of text-generation tasks such as text summarization.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiran Chen, PengFei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang
In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora.
1 code implementation • ACL 2020 • Danqing Wang, PengFei Liu, Yining Zheng, Xipeng Qiu, Xuanjing Huang
An intuitive way is to put them in a graph-based neural network, which has a more complex structure for capturing inter-sentence relationships.
1 code implementation • 20 Apr 2020 • Yong He, PengFei Liu, Xinsheng Zhang, Wang Zhou
We construct a Median-of-Means (MOM) estimator for the centered log-ratio covariance matrix and propose a thresholding procedure that is adaptive to the variability of individual entries.
Methodology
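A hedged NumPy sketch of the two ingredients named above: a Median-of-Means estimate of the centered log-ratio covariance matrix, followed by entry-wise thresholding. The block count and the fixed threshold are illustrative assumptions; the procedure described above adapts the threshold to the variability of individual entries.

```python
# NumPy sketch: Median-of-Means (MOM) estimate of the centered log-ratio (clr)
# covariance matrix, followed by soft-thresholding of the off-diagonal entries.
import numpy as np

def clr(x: np.ndarray) -> np.ndarray:
    """Centered log-ratio transform of compositional data (rows sum to 1, entries > 0)."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

def mom_covariance(x: np.ndarray, n_blocks: int = 5) -> np.ndarray:
    """Entry-wise median over block-wise sample covariances of the clr-transformed data."""
    z = clr(x)
    covs = np.stack([np.cov(block, rowvar=False) for block in np.array_split(z, n_blocks)])
    return np.median(covs, axis=0)

def soft_threshold(cov: np.ndarray, tau: float = 0.05) -> np.ndarray:
    """Shrink off-diagonal entries toward zero; keep the diagonal intact."""
    out = np.sign(cov) * np.maximum(np.abs(cov) - tau, 0.0)
    np.fill_diagonal(out, np.diag(cov))
    return out

rng = np.random.default_rng(0)
compositions = rng.dirichlet(np.ones(10), size=200)  # synthetic compositional data
sigma_hat = soft_threshold(mom_covariance(compositions))
```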
2 code implementations • ACL 2020 • Ming Zhong, PengFei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang
This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems.
Ranked #1 on Text Summarization on BBC XSum
1 code implementation • 12 Jan 2020 • Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang
While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: Does this excellent performance imply a perfect generalization model, or are there still some limitations?
2 code implementations • ECCV 2020 • Peixuan Li, Huaici Zhao, PengFei Liu, Feidao Cao
Different from these approaches, our method predicts the nine perspective keypoints of a 3D bounding box in image space, and then utilizes the geometric relationship between the 3D and 2D perspectives to recover the dimension, location, and orientation in 3D space.
Ranked #6 on Vehicle Pose Estimation on KITTI Cars Hard
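As an illustration of the geometric relationship mentioned above, the sketch below projects the eight corners and the center of a 3D box (its nine perspective keypoints) into image space given camera intrinsics; recovering dimension, location, and orientation from detected 2D keypoints amounts to inverting this mapping. The coordinate conventions and the KITTI-like intrinsics are assumptions for illustration.

```python
# NumPy sketch: project the eight corners and the center of a 3D box (nine
# keypoints) into image space with camera intrinsics K.
import numpy as np

def project_box_keypoints(dims, location, yaw, K):
    h, w, l = dims  # height, width, length of the box
    x = np.array([ l,  l,  l,  l, -l, -l, -l, -l]) / 2.0
    y = np.array([ 0., -h,  0., -h,  0., -h,  0., -h])   # box rests on y = 0
    z = np.array([ w,  w, -w, -w,  w,  w, -w, -w]) / 2.0
    corners = np.stack([x, y, z])                         # (3, 8)
    center = np.array([[0.0], [-h / 2.0], [0.0]])         # (3, 1)
    pts = np.concatenate([corners, center], axis=1)       # nine keypoints, (3, 9)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])      # rotation about the y axis
    cam = R @ pts + np.asarray(location).reshape(3, 1)    # object -> camera frame
    uvw = K @ cam                                         # pinhole projection
    return (uvw[:2] / uvw[2]).T                           # (9, 2) pixel coordinates

K = np.array([[721.5, 0.0, 609.6], [0.0, 721.5, 172.9], [0.0, 0.0, 1.0]])
print(project_box_keypoints((1.5, 1.6, 3.9), (1.8, 1.5, 20.0), 0.3, K))
```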
no code implementations • 7 Jan 2020 • PengFei Liu, Yimin Liu, Tianyao Huang, Yuxiang Lu, Xiqin Wang
In this paper, a decentralized spectrum allocation approach is presented to avoid mutual interference among automotive radars.
no code implementations • TACL 2020 • Ji Zhang, Chengyao Chen, PengFei Liu, Chao He, Cane Wing-Ki Leung
Second, it shows a strong advantage in determining the sentiment of a target when the context sentence contains multiple semantic segments.
no code implementations • 2 Dec 2019 • Qipeng Guo, Xipeng Qiu, PengFei Liu, xiangyang xue, Zheng Zhang
In this paper, we introduce the prior knowledge, multi-scale structure, into self-attention modules.
1 code implementation • 12 Nov 2019 • Tianxiang Sun, Yunfan Shao, Xiaonan Li, PengFei Liu, Hang Yan, Xipeng Qiu, Xuanjing Huang
Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, hierarchical sharing, and soft sharing.
no code implementations • WS 2019 • Ming Zhong, Danqing Wang, PengFei Liu, Xipeng Qiu, Xuanjing Huang
In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models.
no code implementations • 25 Sep 2019 • Jin Zhang, Weipeng Ming, PengFei Liu
In the first stage, this method locates and recognizes the math symbols in the input image with an object detection algorithm.
no code implementations • 25 Sep 2019 • Jinlan Fu, PengFei Liu, Xuanjing Huang
With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits.
no code implementations • 30 Aug 2019 • Danqing Wang, PengFei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, Xuanjing Huang
Although domain shift has been well explored in many NLP applications, it still has received little attention in the domain of extractive text summarization.
1 code implementation • 29 Aug 2019 • Shuaichen Chang, PengFei Liu, Yun Tang, Jing Huang, Xiaodong He, Bo-Wen Zhou
Recent years have seen great success in the use of neural seq2seq models on the text-to-SQL task.
no code implementations • 25 Jul 2019 • Lin Zehui, PengFei Liu, Luyao Huang, Junkun Chen, Xipeng Qiu, Xuanjing Huang
Variant dropout methods have been designed for the fully-connected, convolutional, and recurrent layers of neural networks, and have been shown to be effective in avoiding overfitting.
2 code implementations • ACL 2019 • Ming Zhong, PengFei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang
Recent years have seen remarkable success in the use of deep neural networks for text summarization.
Ranked #6 on Extractive Text Summarization on CNN / Daily Mail
1 code implementation • ACL 2019 • Dayiheng Liu, Jie Fu, PengFei Liu, Jiancheng Lv
Text infilling is defined as a task for filling in the missing part of a sentence or paragraph, which is suitable for many real-world natural language generation scenarios.
no code implementations • 24 Apr 2019 • PengFei Liu, Yimin Liu, Tianyao Huang, Yuxiang Lu, Xiqin Wang
The concept of cognitive radar (CR) enables radar systems to achieve intelligent adaption to a changeable environment with feedback facility from receiver to transmitter.
2 code implementations • NAACL 2019 • Qipeng Guo, Xipeng Qiu, PengFei Liu, Yunfan Shao, xiangyang xue, Zheng Zhang
Although Transformer has achieved great successes on many NLP tasks, its heavy structure with fully-connected attention connections leads to dependencies on large training data.
Ranked #12 on Sentiment Analysis on SST-5 Fine-grained classification
Named Entity Recognition (NER) • Natural Language Inference • +2
1 code implementation • 28 Dec 2018 • Pengfei Liu
Understanding the phenotypic drug response of cancer cell lines plays a vital role in anti-cancer drug discovery and repurposing.
no code implementations • 26 Nov 2018 • Pengfei Liu, Jie Fu, Yue Dong, Xipeng Qiu, Jackie Chi Kit Cheung
We present two architectures for multi-task learning with neural sequence models.
no code implementations • 21 Nov 2018 • Pengfei Liu, Shuaichen Chang, Xuanjing Huang, Jian Tang, Jackie Chi Kit Cheung
Recently, a large number of neural mechanisms and models have been proposed for sequence learning, of which self-attention, as exemplified by the Transformer model, and graph neural networks (GNNs) have attracted much attention.
no code implementations • 23 Oct 2018 • Pengfei Liu, Xuanjing Huang
In this paper, we describe a general framework, Parameters Read-Write Networks (PRaWNs), to systematically analyze current neural models for multi-task learning, in which we find that although existing models are expected to disentangle features into different spaces, the features learned in practice remain entangled in the shared space, leaving potential hazards for further training or unseen tasks.
no code implementations • 8 Aug 2018 • Pengfei Liu, Ji Zhang, Cane Wing-Ki Leung, Chao He, Thomas L. Griffiths
Effective representation of a text is critical for various natural language processing tasks.
no code implementations • 25 Feb 2018 • Junkun Chen, Xipeng Qiu, Pengfei Liu, Xuanjing Huang
Specifically, we use a shared meta-network to capture the meta-knowledge of semantic composition and generate the parameters of the task-specific semantic composition models.
no code implementations • EMNLP 2017 • Pengfei Liu, Kaiyu Qian, Xipeng Qiu, Xuanjing Huang
Idioms are peculiar linguistic constructions that impose great challenges for representing the semantics of language, especially in current prevailing end-to-end neural models, which assume that the semantics of a phrase or sentence can be literally composed from its constitutive words.
no code implementations • 11 May 2017 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Tree-structured neural networks have proven to be effective in learning semantic representations by exploiting syntactic information.
no code implementations • ACL 2017 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Neural network models have shown promising opportunities for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features.
no code implementations • 23 Sep 2016 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Neural network based models have achieved impressive results on various specific tasks.
no code implementations • 22 Jul 2016 • PengFei Liu, Xipeng Qiu, Xuanjing Huang
Introducing an attentional mechanism into neural networks is a powerful concept that has achieved impressive results in many natural language processing tasks.
no code implementations • EMNLP 2016 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Recently, there is rising interest in modelling the interactions of two sentences with deep neural networks.
Ranked #73 on Natural Language Inference on SNLI
no code implementations • 17 May 2016 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Neural network based methods have obtained great progress on a variety of natural language processing tasks.
Ranked #10 on Emotion Recognition in Conversation on CPED
General Classification • +3